Asynchronous gRPC with Boost.Asio executors

Overview

asio-grpc


This library provides an implementation of boost::asio::execution_context that dispatches work to a grpc::CompletionQueue, making it possible to write asynchronous gRPC servers and clients using C++20 coroutines, Boost.Coroutine, Boost.Asio's stackless coroutines, std::future and callbacks. It also enables other Boost.Asio non-blocking I/O operations, such as HTTP requests, all on the same CompletionQueue.

Example

Server side:

grpc::ServerBuilder builder;
std::unique_ptr<grpc::Server> server;
helloworld::Greeter::AsyncService service;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
builder.RegisterService(&service);
server = builder.BuildAndStart();

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ServerContext server_context;
        helloworld::HelloRequest request;
        grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
        bool request_ok = co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service,
                                                  server_context, request, writer);
        helloworld::HelloReply response;
        std::string prefix("Hello ");
        response.set_message(prefix + request.name());
        bool finish_ok = co_await agrpc::finish(writer, response, grpc::Status::OK);
    },
    boost::asio::detached);

grpc_context.run();
server->Shutdown();

Client side:

auto stub =
    helloworld::Greeter::NewStub(grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials()));
agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ClientContext client_context;
        helloworld::HelloRequest request;
        request.set_name("world");
        std::unique_ptr<grpc::ClientAsyncResponseReader<helloworld::HelloReply>> reader =
            stub->AsyncSayHello(&client_context, request, agrpc::get_completion_queue(grpc_context));
        helloworld::HelloReply response;
        grpc::Status status;
        bool ok = co_await agrpc::finish(*reader, response, status);
    },
    boost::asio::detached);

grpc_context.run();

Requirements

Tested:

  • gRPC 1.37
  • Boost 1.74
  • MSVC VS 2019 16.11
  • GCC 10.3
  • C++17 or C++20

For MSVC compilers the following compile definitions might need to be set:

BOOST_ASIO_HAS_DEDUCED_REQUIRE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EXECUTE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EQUALITY_COMPARABLE_TRAIT
BOOST_ASIO_HAS_DEDUCED_QUERY_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_PREFER_MEMBER_TRAIT
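These definitions can be applied per target in CMake; a minimal sketch (the target name your_app is a placeholder for your own executable or library):

```cmake
if(MSVC)
    target_compile_definitions(
        your_app
        PRIVATE BOOST_ASIO_HAS_DEDUCED_REQUIRE_MEMBER_TRAIT
                BOOST_ASIO_HAS_DEDUCED_EXECUTE_MEMBER_TRAIT
                BOOST_ASIO_HAS_DEDUCED_EQUALITY_COMPARABLE_TRAIT
                BOOST_ASIO_HAS_DEDUCED_QUERY_MEMBER_TRAIT
                BOOST_ASIO_HAS_DEDUCED_PREFER_MEMBER_TRAIT)
endif()
```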

Usage

The library can be added to a CMake project using either add_subdirectory or find_package. Once set up, include the following header:

#include <agrpc/asioGrpc.hpp>

As a subdirectory

Clone the repository into a subdirectory of your CMake project. Then add it and link it to your target.

add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)

As a CMake package

Clone the repository and install it.

mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/desired/installation/directory ..
cmake --build . --target install

Locate it and link it to your target.

# Make sure to set CMAKE_PREFIX_PATH to /desired/installation/directory
find_package(asio-grpc)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)

Performance

asio-grpc is part of grpc_bench. Head over there to compare its performance against other libraries and languages.

Results from the helloworld unary RPC. Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, Linux, Boost 1.74, gRPC 1.30.2, asio-grpc v1.0.0

1 CPU server

| name | req/s | avg. latency | 90 % in | 95 % in | 99 % in | avg. cpu | avg. memory |
|------|-------|--------------|---------|---------|---------|----------|-------------|
| rust_tonic_mt | 44639 | 22.27 ms | 9.63 ms | 10.55 ms | 572.53 ms | 101.12% | 16.06 MiB |
| rust_grpcio | 39826 | 24.95 ms | 26.31 ms | 27.19 ms | 28.45 ms | 101.5% | 30.46 MiB |
| rust_thruster_mt | 38038 | 26.17 ms | 11.39 ms | 12.33 ms | 673.02 ms | 100.16% | 13.17 MiB |
| cpp_grpc_mt | 34954 | 28.53 ms | 31.28 ms | 31.75 ms | 33.55 ms | 101.93% | 8.36 MiB |
| cpp_asio_grpc | 34015 | 29.32 ms | 32.05 ms | 32.56 ms | 34.41 ms | 101.35% | 7.72 MiB |
| go_grpc | 6772 | 141.75 ms | 287.57 ms | 330.45 ms | 499.47 ms | 97.8% | 28.07 MiB |

2 CPU server

| name | req/s | avg. latency | 90 % in | 95 % in | 99 % in | avg. cpu | avg. memory |
|------|-------|--------------|---------|---------|---------|----------|-------------|
| rust_tonic_mt | 66253 | 14.33 ms | 39.24 ms | 59.11 ms | 91.03 ms | 201.2% | 16.09 MiB |
| rust_grpcio | 62678 | 15.38 ms | 22.38 ms | 24.81 ms | 29.00 ms | 201.38% | 45.07 MiB |
| cpp_grpc_mt | 62488 | 14.78 ms | 31.76 ms | 40.60 ms | 60.79 ms | 199.84% | 24.9 MiB |
| cpp_asio_grpc | 62040 | 14.91 ms | 30.17 ms | 37.77 ms | 60.10 ms | 199.6% | 26.65 MiB |
| rust_thruster_mt | 59204 | 16.22 ms | 43.04 ms | 71.87 ms | 110.07 ms | 199.31% | 13.87 MiB |
| go_grpc | 13978 | 63.48 ms | 110.86 ms | 160.62 ms | 205.85 ms | 198.23% | 29.48 MiB |

Documentation

The main workhorses of this library are the agrpc::GrpcContext and its executor_type - agrpc::GrpcExecutor.

The agrpc::GrpcContext implements boost::asio::execution_context and can be used as an argument to Boost.Asio functions that expect an ExecutionContext like boost::asio::spawn.

Likewise, the agrpc::GrpcExecutor models the Executor and Networking TS requirements and can therefore be used in places where Boost.Asio expects an Executor.

This library's API for RPCs is modeled closely after the asynchronous, tag-based API of gRPC. As an example, the equivalent for grpc::ClientAsyncReader<helloworld::HelloReply>.Read(helloworld::HelloReply*, void*) would be agrpc::read(grpc::ClientAsyncReader<helloworld::HelloReply>&, helloworld::HelloReply&, CompletionToken). It can therefore be helpful to refer to async_unary_call.h and async_stream.h while working with this library.

Instead of the void* tag in the gRPC API, the functions in this library expect a CompletionToken. Boost.Asio comes with several CompletionTokens out of the box: C++20 coroutines, std::future, stackless coroutines, callbacks and Boost.Coroutine.

Getting started

Start by creating an agrpc::GrpcContext.

For servers and clients:

grpc::ServerBuilder builder;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};


For clients only:

agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

Add some work to the grpc_context (shown further below) and run it. Make sure to shut down the server before destructing the grpc_context, and destruct the grpc_context before destructing the server. A grpc_context can only be run on one thread at a time.

grpc_context.run();
server->Shutdown();
}  // grpc_context is destructed here before the server


It might also be helpful to create a work guard before running the agrpc::GrpcContext to prevent grpc_context.run() from returning early.

auto guard = boost::asio::make_work_guard(grpc_context);


Alarm

gRPC provides a grpc::Alarm which is similar to boost::asio::steady_timer. Simply construct it and pass it to agrpc::wait with the desired deadline to wait for the specified amount of time without blocking the event loop.

grpc::Alarm alarm;
bool wait_ok = agrpc::wait(alarm, std::chrono::system_clock::now() + std::chrono::seconds(1), yield);


wait_ok is true if the Alarm expired, false if it was canceled. (source)

Unary RPC Server-Side

Start by requesting an RPC. In this example yield is a boost::asio::yield_context; other CompletionTokens are supported as well, e.g. boost::asio::use_awaitable. The example namespace has been generated from example.proto.

grpc::ServerContext server_context;
example::v1::Request request;
grpc::ServerAsyncResponseWriter<example::v1::Response> writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestUnary, service, server_context,
                                 request, writer, yield);

If request_ok is true then the RPC has indeed been started; otherwise the server has been shut down before this particular request got matched to an incoming RPC. For a full list of ok-values returned by gRPC see CompletionQueue::Next.

The grpc::ServerAsyncResponseWriter is used to drive the RPC. The following actions can be performed.

bool send_ok = agrpc::send_initial_metadata(writer, yield);

example::v1::Response response;
bool finish_ok = agrpc::finish(writer, response, grpc::Status::OK, yield);

bool finish_with_error_ok = agrpc::finish_with_error(writer, grpc::Status::CANCELLED, yield);


Unary RPC Client-Side

On the client-side, an RPC is initiated by calling the desired AsyncXXX function of the Stub.

grpc::ClientContext client_context;
example::v1::Request request;
std::unique_ptr<grpc::ClientAsyncResponseReader<example::v1::Response>> reader =
    stub.AsyncUnary(&client_context, request, agrpc::get_completion_queue(grpc_context));

The grpc::ClientAsyncResponseReader is used to drive the RPC.

bool read_ok = agrpc::read_initial_metadata(*reader, yield);

example::v1::Response response;
grpc::Status status;
bool finish_ok = agrpc::finish(*reader, response, status, yield);


For the meaning of read_ok and finish_ok see CompletionQueue::Next.

Client-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
grpc::ServerAsyncReader<example::v1::Response, example::v1::Request> reader{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestClientStreaming, service,
                                 server_context, reader, yield);

Drive the RPC with the following functions.

bool send_ok = agrpc::send_initial_metadata(reader, yield);

example::v1::Request request;
bool read_ok = agrpc::read(reader, request, yield);

example::v1::Response response;
bool finish_ok = agrpc::finish(reader, response, grpc::Status::OK, yield);


Client-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
example::v1::Response response;
std::unique_ptr<grpc::ClientAsyncWriter<example::v1::Request>> writer;

bool request_ok = agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context, writer,
                                 response, yield);

There is also a convenience overload that returns the grpc::ClientAsyncWriter at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [writer, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncClientStreaming, stub, client_context, response, yield);


With the grpc::ClientAsyncWriter the following actions can be performed to drive the RPC.

bool read_ok = agrpc::read_initial_metadata(*writer, yield);

example::v1::Request request;
bool write_ok = agrpc::write(*writer, request, yield);

bool writes_done_ok = agrpc::writes_done(*writer, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*writer, status, yield);


For the meaning of read_ok, write_ok, writes_done_ok and finish_ok see CompletionQueue::Next.

Server-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
example::v1::Request request;
grpc::ServerAsyncWriter<example::v1::Response> writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestServerStreaming, service,
                                 server_context, request, writer, yield);

With the grpc::ServerAsyncWriter the following actions can be performed to drive the RPC.

bool send_ok = agrpc::send_initial_metadata(writer, yield);

example::v1::Response response;
bool write_ok = agrpc::write(writer, response, yield);

bool write_and_finish_ok = agrpc::write_and_finish(writer, response, grpc::WriteOptions{}, grpc::Status::OK, yield);

bool finish_ok = agrpc::finish(writer, grpc::Status::OK, yield);


For the meaning of send_ok, write_ok, write_and_finish_ok and finish_ok see CompletionQueue::Next.

Server-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
example::v1::Request request;
std::unique_ptr<grpc::ClientAsyncReader<example::v1::Response>> reader;

bool request_ok =
    agrpc::request(&example::v1::Example::Stub::AsyncServerStreaming, stub, client_context, request, reader, yield);

There is also a convenience overload that returns the grpc::ClientAsyncReader at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [reader, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncServerStreaming, stub, client_context, request, yield);


With the grpc::ClientAsyncReader the following actions can be performed to drive the RPC.

bool read_metadata_ok = agrpc::read_initial_metadata(*reader, yield);

example::v1::Response response;
bool read_ok = agrpc::read(*reader, response, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*reader, status, yield);


For the meaning of read_metadata_ok, read_ok and finish_ok see CompletionQueue::Next.

Bidirectional-Streaming RPC Server-Side

Start by requesting an RPC.

grpc::ServerContext server_context;
grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};

bool request_ok = agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming, service,
                                 server_context, reader_writer, yield);

With the grpc::ServerAsyncReaderWriter the following actions can be performed to drive the RPC.

bool send_ok = agrpc::send_initial_metadata(reader_writer, yield);

example::v1::Request request;
bool read_ok = agrpc::read(reader_writer, request, yield);

example::v1::Response response;
bool write_and_finish_ok =
    agrpc::write_and_finish(reader_writer, response, grpc::WriteOptions{}, grpc::Status::OK, yield);

bool write_ok = agrpc::write(reader_writer, response, yield);

bool finish_ok = agrpc::finish(reader_writer, grpc::Status::OK, yield);


For the meaning of send_ok, read_ok, write_and_finish_ok, write_ok and finish_ok see CompletionQueue::Next.

Bidirectional-Streaming RPC Client-Side

Start by requesting an RPC.

grpc::ClientContext client_context;
std::unique_ptr<grpc::ClientAsyncReaderWriter<example::v1::Request, example::v1::Response>> reader_writer;

bool request_ok = agrpc::request(&example::v1::Example::Stub::AsyncBidirectionalStreaming, stub, client_context,
                                 reader_writer, yield);

There is also a convenience overload that returns the grpc::ClientAsyncReaderWriter at the cost of a sizeof(std::unique_ptr) memory overhead.

auto [reader_writer, request_ok] =
    agrpc::request(&example::v1::Example::Stub::AsyncBidirectionalStreaming, stub, client_context, yield);


With the grpc::ClientAsyncReaderWriter the following actions can be performed to drive the RPC.

bool read_metadata_ok = agrpc::read_initial_metadata(*reader_writer, yield);

example::v1::Request request;
bool write_ok = agrpc::write(*reader_writer, request, yield);

bool writes_done_ok = agrpc::writes_done(*reader_writer, yield);

example::v1::Response response;
bool read_ok = agrpc::read(*reader_writer, response, yield);

grpc::Status status;
bool finish_ok = agrpc::finish(*reader_writer, status, yield);


For the meaning of read_metadata_ok, write_ok, writes_done_ok, read_ok and finish_ok see CompletionQueue::Next.

Comments
  • c++20 coroutine based version is not as fast as the one using boost fiber?

    Recently I ran the grpc_bench to compare the performance of different settings. I found the coroutine based one is slower than both the boost fiber version and the grpc multi-thread version. Do you have any insight about this?

    opened by npuichigo 18
  • Which compile definitions are recommended for clang?

    Which compile definitions are recommended for clang build on Linux?

    I already figured out that I need to set on my cmake line:

    -DASIO_GRPC_USE_BOOST_CONTAINER=1

    But I wonder if there are any others?

    I'm using a slightly older version of asio-grpc; it would take me a little effort to update, so I don't want to have to do that. I'm using commit a17b559b101d10836c0fc226101f857728f3428f. I don't know if this is the reason?

    The reason that I ask is that when I use clang 10.0.1 I get a clang crash when trying to build hello-world-server-cpp20:

    /gitworkspace/distributions/clang/10.0.1/bin/clang++ -DBOOST_ALL_NO_LIB -DCARES_STATICLIB -I/gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/native.build/src/generated -isystem /gitworkspace/rbresali/mdt/mdt_stage/native.stage/usr/local/include -stdlib=libc++ -fPIC -g -Wall -Werror -DBOOST_THREAD_VERSION=4 -save-temps=obj -std=gnu++2a -MD -MT src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -MF src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o.d -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -c /gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/src/hello-world-server-cpp20.cpp
    Stack dump:
    0.	Program arguments: /vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10 -cc1 -triple x86_64-unknown-linux-gnu -S -save-temps=obj -disable-free -disable-llvm-verifier -discard-value-names -main-file-name hello-world-server-cpp20.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mframe-pointer=all -fmath-errno -fno-rounding-math -masm-verbose -mconstructor-aliases -munwind-tables -target-cpu x86-64 -dwarf-column-info -fno-split-dwarf-inlining -debug-info-kind=limited -dwarf-version=4 -debugger-tuning=gdb -resource-dir /vol/dwdmgit_distributions/clang/10.0.1/lib64/clang/10.0.1 -Wall -Werror -std=gnu++2a -fdebug-compilation-dir /gitworkspace/rbresali-mdt_211028.110446/mdt_example/asio_grpc_example/native.build -ferror-limit 19 -fmessage-length 0 -fgnuc-version=4.2.1 -fobjc-runtime=gcc -fdiagnostics-show-option -faddrsig -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.s -x ir src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc 
    1.	Code generation
    2.	Running pass 'Function Pass Manager' on module 'src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc'.
    3.	Running pass 'X86 DAG->DAG Instruction Selection' on function '@"_ZN5boost4asio6detail20co_spawn_entry_pointINS0_15any_io_executorEZ4mainE3$_0NS1_16detached_handlerEEENS0_9awaitableINS1_28awaitable_thread_entry_pointET_EEPNS6_IvS8_EES8_T0_T1_"'
     #0 0x00000000016b6e24 PrintStackTraceSignalHandler(void*) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b6e24)
     #1 0x00000000016b4b8e llvm::sys::RunSignalHandlers() (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b4b8e)
     #2 0x00000000016b7225 SignalHandler(int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b7225)
     #3 0x00007f5f529b1630 __restore_rt (/lib64/libpthread.so.0+0xf630)
     #4 0x00000000021de359 llvm::DAGTypeLegalizer::getTableId(llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de359)
     #5 0x00000000021de216 llvm::DAGTypeLegalizer::RemapValue(llvm::SDValue&) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de216)
     #6 0x00000000021dd97f llvm::DAGTypeLegalizer::ReplaceValueWith(llvm::SDValue, llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21dd97f)
     #7 0x00000000021e01ac llvm::DAGTypeLegalizer::DisintegrateMERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21e01ac)
     #8 0x0000000002236e99 llvm::DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x2236e99)
     #9 0x00007ffe5377a490 
    clang-10: error: unable to execute command: Segmentation fault (core dumped)
    clang-10: error: clang frontend command failed due to signal (use -v to see invocation)
    clang version 10.0.1 
    Target: x86_64-unknown-linux-gnu
    Thread model: posix
    InstalledDir: /gitworkspace/distributions/clang/10.0.1/bin
    clang-10: note: diagnostic msg: PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace, preprocessed source, and associated run script.
    clang-10: note: diagnostic msg: 
    ********************
    
    PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
    Preprocessed source(s) and associated run script(s) are located at:
    clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.cpp
    clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.sh
    clang-10: note: diagnostic msg: 
    
    ********************
    
    
    opened by rbresalier 12
  • How to get notified when client close

    Suppose I have a server streaming rpc:

    rpc ServerStream(Req) returns (stream Resp);
    

    When a client calls ServerStream, the server does some bookkeeping; when the client disconnects, the bookkeeping needs to be removed. Is there an API, let's say on_recv_client_close(release_function), that can register a callback for a client closed event?

    Thank you.

    P.S.

    • I know I can use the failed state of agrpc::write to indicate that the client is closed. But I want to get notified even when the server doesn't send anything.
    • grpc core has a GRPC_OP_RECV_CLOSE_ON_SERVER op that I don't know whether it helps.
    • Here is a similar question I found but for grpc-go: https://stackoverflow.com/questions/39825671/grpc-go-how-to-know-in-server-side-when-client-closes-the-connection
    opened by 4eUeP 11
  • Compiler error trying to use asio::experimental::use_promise as completion token

    I'm trying to use a promise as the completion token to agrpc methods, and getting a compiler error using gcc 11. Here is a minimal example, based off your streaming-server.cpp example:

    // additional includes required:
    #include <asio/experimental/promise.hpp>
    #include <asio/this_coro.hpp>
    
    asio::awaitable<void> handle_bidirectional_streaming_request(example::v1::Example::AsyncService& service)
    {
        grpc::ServerContext server_context;
        grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};
        bool request_ok = co_await agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming,
                                                  service, server_context, reader_writer);
        if (!request_ok)
        {
            // Server is shutting down.
            co_return;
        }
        example::v1::Request request;
    
        // none of the below work to put as COMPLETIONTOKEN - the following line fails to compile:
        // asio::experimental::use_promise
        // asio::experimental::use_promise_t<agrpc::GrpcContext>{}
        // asio::experimental::use_promise_t<agrpc::GrpcContext::executor_type>{}
        // asio::experimental::use_promise_t<agrpc::s::BasicGrpcExecutor<>>{}
        // asio::experimental::use_promise_t<asio::this_coro::executor_t>{}
        auto&& read_promise = agrpc::read(reader_writer, COMPLETIONTOKEN);
    
        co_await read_promise.async_wait(asio::use_awaitable);
    }
    

    The use case is that later in the function I would simultaneously await any of 3 conditions: New request from client, finished writing response to client, or new response ready from data processing thread pool:

    auto&& write_promise = agrpc::write(rw, response, COMPLETIONTOKEN);
    auto&& data_ready_promise = // asynchronously dispatch work to data processing thread pool
    auto rwd_promise = asio::experimental::promise<>::all(
        std::forward<decltype(read_promise)>(read_promise),
        std::forward<decltype(write_promise)>(write_promise),
        std::forward<decltype(data_ready_promise)>(data_ready_promise)
    );
    std::tie(read_ok, write_ok, data_ready_ok) = co_await rwd_promise.async_wait(asio::use_awaitable);
    
    opened by muusbolla 11
  • Multi-threaded server and health check

    Hi,

    I tried the example/multi-threaded-server test with DefaultHealthCheckService enabled and found that after I set grpc::EnableDefaultHealthCheckService(true) before starting the server, all handlers worked in only one thread. What could be causing this and how can I fix it?

    opened by nikkov 10
  • Using asio::io_context in single threaded applications

    Hi, thank you for your awesome library. It is a very convenient way to use asio for writing single-threaded (but concurrent) applications without worrying about the problems of multi-threaded applications. If I'm not mistaken, right now the only way to use agrpc is to instantiate a GrpcContext and run it on its own thread, which means we need to run asio::io_context on a separate thread and deal with concurrency problems between them. Is there any plan to make it possible to reuse asio::io_context for agrpc services?

    opened by ashtum 10
  • Threads and asio-grpc

    Thank you for implementing this excellent project to provide a consolidated way of executing async grpc commands and sending/receiving tcp packages asynchronously with the boost asio library. I just began to use boost asio recently and have a couple of questions about using this library.

    According to this link: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html, multiple threads may call io_context::run() to set up a thread pool, and the io_context may distribute work across them. Does asio-grpc's execution_context also guarantee thread safety if a thread pool is enabled on it? I am using C++20 coroutines and assuming that each co_spawn will locate a thread from the thread pool and run the composed asynchronous operations. Correct me if my understanding is wrong. What if the composed asynchronous operations contain a blocking operation? It may block the running thread, so how can I prevent the other co_spawn calls from using the blocked thread for execution? In addition, co_spawn can spawn from both an execution_context and an executor. I am guessing that if spawned from an execution_context it will locate a new thread and run, while from an executor it will just run on the thread that the executor is running on. Is my guess correct?

    Meanwhile #8 mentions that if you co_spawn a non-grpc async operation like a steady_timer from the grpc_context, it will automatically spawn a 2nd io_context thread. So it seems that asio-grpc internally maintains two threads, one each for the grpc execution_context and the io_context, to run async grpc operations and other async non-grpc operations. And the last comment says version 1.4 would also support an asio io_context for an agrpc::GrpcContext. My application would serve many clients, and for each client's request it would issue one single composed asynchronous operation containing one async grpc call and several async tcp reads and writes to the server, then respond back to the client. Will asio-grpc guarantee there won't be interleaving between the grpc operation and the tcp operations when the single composed asynchronous operation is co_spawned from either grpc_context or io_context, since they are two contexts on two threads? Also, does asio-grpc support having a thread pool for the io_context and a single thread for the grpc_context, or thread pools for both?

                          one single composed asynchronous operations
                         /                                            \
    client1 --> { co_wait async grpc operation, co_wait async tcp operations } --> server 
    client2 --> { co_wait async grpc operation, co_wait async tcp operations } --> server
    clientN ...
    

    Hope to get some guidance from you. Thanks.

    opened by vangork 9
  • Is there a way to build helloworld server code

    I tried extensively with the following code. I get a compilation error as in this post. Any help is appreciated. Thank you @Tradias

    Code

        15  #include "zprobe.grpc.pb.h"
        16
        17  #include <agrpc/asio_grpc.hpp>
        18  #include <boost/asio/co_spawn.hpp>
        19  #include <boost/asio/detached.hpp>
        20  #include <boost/asio/signal_set.hpp>
        21  #include <grpcpp/server.h>
        22  #include <grpcpp/server_builder.h>
        23
        24  #include <optional>
        25  #include <thread>
        26  namespace asio = boost::asio;
        27
        28
        29  // begin-snippet: server-side-helloworld
        30  // ---------------------------------------------------
        31  // Server-side hello world which handles exactly one request from the client before shutting down.
        32  // ---------------------------------------------------
        33  // end-snippet
        34  int main(int argc, const char** argv)
        35  {
        36      const auto port = argc >= 2 ? argv[1] : "50051";
        37      const auto host = std::string("0.0.0.0:") + port;
        38
        39      std::unique_ptr<grpc::Server> server;
        40
        41      grpc::ServerBuilder builder;
        42      agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
        43      builder.AddListeningPort(host, grpc::InsecureServerCredentials());
        44      zprobe::ProbeService::AsyncService service;
        45      builder.RegisterService(&service);
        46      server = builder.BuildAndStart();
        47
        48      asio::co_spawn(
        49          grpc_context,
        50          [&]() -> asio::awaitable<void>
        51          {
        52              grpc::ServerContext server_context;
        53              helloworld::HelloRequest request;
        54              grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
        55              co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service, server_context,
        56                                      request, writer, asio::use_awaitable);
        57              helloworld::HelloReply response;
        58              response.set_message("Hello " + request.name());
        59              co_await agrpc::finish(writer, response, grpc::Status::OK, asio::use_awaitable);
        60          },
        61          asio::detached);
        62
        63      grpc_context.run();
        64
        65      server->Shutdown();
        66  }
```
    
    **CMake configure succeeds:**

    ```
    cmake .. "-DCMAKE_TOOLCHAIN_FILE=~/tools/vcpkg/scripts/buildsystems/vcpkg.cmake" "-DCMAKE_PREFIX_PATH=$MY_INSTALL_DIR"
    ```
    
    **CMakeLists.txt**

    ```
    target_link_libraries(zprobe
      PUBLIC zprobe_grpc_proto
      ${_REFLECTION}
      ${_GRPC_GRPCPP}
      ${_PROTOBUF_LIBPROTOBUF}
      asio-grpc::asio-grpc-standalone-asio)
    ```

    **ERROR on make:**

    [ 83%] Building CXX object CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:26:29: error: ‘namespace asio = boost::boost::asio;’ conflicts with a previous declaration
       26 | namespace asio = boost::asio;
          |                             ^
    In file included from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution/allocator.hpp:19,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution.hpp:18,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/any_io_executor.hpp:22,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/asio_forward.hpp:24,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/default_completion_token.hpp:18,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/default_completion_token.hpp:19,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/alarm.hpp:18,
                     from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/asio_grpc.hpp:33,
                     from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:17:
    /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/detail/type_traits.hpp:51:11: note: previous declaration ‘namespace asio { }’
       51 | namespace asio {
          |           ^~~~
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp: In function ‘int main(int, const char**)’:
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:48:11: error: ‘co_spawn’ is not a member of ‘asio’
       48 |     asio::co_spawn(
          |           ^~~~~~~~
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:24: error: ‘awaitable’ in namespace ‘asio’ does not name a template type
       50 |         [&]() -> asio::awaitable<void>
          |                        ^~~~~~~~~
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:33: error: expected ‘{’ before ‘<’ token
       50 |         [&]() -> asio::awaitable<void>
          |                                 ^
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:34: error: expected primary-expression before ‘void’
       50 |         [&]() -> asio::awaitable<void>
          |                                  ^~~~
    /home/bbhushan/work/zprobe/grpc_asio_server.cpp:61:15: error: ‘detached’ is not a member of ‘asio’; did you mean ‘boost::asio::detached’?
       61 |         asio::detached);
          |               ^~~~~~~~
    In file included from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:19:
    /home/bbhushan/tools/vcpkg/installed/x64-linux/include/boost/asio/detached.hpp:103:22: note: ‘boost::asio::detached’ declared here
      103 | constexpr detached_t detached;
          |                      ^~~~~~~~
    make[2]: *** [CMakeFiles/zprobe.dir/build.make:76: CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/zprobe.dir/all] Error 2
    make: *** [Makefile:91: all] Error 2
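    A note on the error: the CMakeLists.txt above links asio-grpc::asio-grpc-standalone-asio, so the include paths in the log point at standalone Asio headers (include/asio/...), which declare a plain namespace asio. That declaration collides with the alias namespace asio = boost::asio; in the source file. One possible fix, sketched under the assumption that the project intends to use Boost.Asio (asio-grpc::asio-grpc is the Boost.Asio target name from the asio-grpc README):

    ```cmake
    # Sketch: link the Boost.Asio backend of asio-grpc so that no bare
    # `asio` namespace is pulled in by the library's headers.
    target_link_libraries(zprobe
      PUBLIC zprobe_grpc_proto
      ${_REFLECTION}
      ${_GRPC_GRPCPP}
      ${_PROTOBUF_LIBPROTOBUF}
      asio-grpc::asio-grpc)  # was: asio-grpc::asio-grpc-standalone-asio
    ```

    Alternatively, keep the standalone backend, include the <asio/...> headers, and drop the namespace asio = boost::asio; alias.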
    
    opened by bharat76 6
  • Question: is it possible to implement a server to client request, using a bidirectional-streaming channel and exposed as a standard C++ class/interface?


    Hi,

    Thank you for writing this library.

    I'm currently trying to use asio-grpc to implement a service that, as part of a request, can call back to a connected client to get additional data, dependency-injection style. This dependency-injection channel is a long-lived bidirectional streaming gRPC call. My problem is that the server logic calls into a normal pure virtual class (interface) to request these values. AFAIK this rules out using co_await/co_return, since that would imply my interface should return a coroutine. So I'm trying to figure out whether I can implement such an interface using co_yield, where the consumer of the values does not need to be a coroutine.

    The server logic is triggered by another async gRPC call, but the server logic itself is not async.

    I hope someone is able to help me to figure out if and how this is possible. Let me know if my description is not clear enough.

    Best regards

    opened by OleStauning 6
  • compiling problem with versions installed by vcpkg


    Hi, thanks for creating a wonderful framework. It has made my life much easier.

    I have used your framework for a few months and now I need to set up our project on a new machine. The installation is successful with the following command

    ./vcpkg install asio-grpc[boost-container]:x64-linux
    

    However, when I compile my project, I get the following errors: [screenshots omitted]

    Do you have any idea why this error happens?

    I look forward to hearing from you soon.

    Thanks

    opened by khanhha 6
  • assertion GRPC_CALL_ERROR_TOO_MANY_OPERATIONS in the example server code


    Hi, I am trying to run the example-server.cpp and example-client.cpp examples from version 1.1.2 (installed using vcpkg with the boost-container feature).

    I got an assertion error GRPC_CALL_ERROR_TOO_MANY_OPERATIONS at the following line in the server code: [screenshot omitted]

    Here is the stack trace when the assertion occurs: [screenshot omitted]

    Here is the detailed information about the assertion: [screenshot omitted]

    Do you have any idea why the bug happens?

    opened by khanhha 6
  • generic CMake on Linux


    I'm hitting multiple issues using generic CMake (no package managers) on Fedora 36 and Ubuntu 22. I can make it work with changes.

    Are you open to CMake changes/PRs?

    opened by jwinarske 9
  • cmake "find asio" weirdness

    Hi,

    Asio typically comes with Boost, and Boost does not install a "Findasio.cmake" file. This causes a CMake failure. (I'm using v3.25.1.) Chris Kohlhoff does not provide CMake files either.

    To install through cmake, I ended up patching cmake/AsioGrpcFindPackages.cmake, replacing find_package(asio) with:

    SET(_asio_grpc_asio_root "${CMAKE_PREFIX_PATH}/include/boost")
    

    Note that CMAKE_PREFIX_PATH/include/boost is the most likely place to find the header asio.hpp.

    opened by aaron-michaux 1
  • Clang 14 and 15 build error


    Hello. Thanks for the library!

    asio-grpc/src/agrpc/detail/memory_resource.hpp:26:10: fatal error: 'memory_resource' file not found
    #include <memory_resource>
             ^~~~~~~~~~~~~~~~~
    1 error generated.
    

    On my system this header is only available as #include <experimental/memory_resource>.

    opened by ivan-volnov 8
  • High-level server API


    • I/O object for server-side requests: unary and streaming. Similar to the high-level client API.
    • Figure out what API to provide for attaching request handler. E.g. introspect one user-provided ServiceHandler class and register repeatedly_request for all of them automatically. Or let the user register a handler per endpoint themselves, like with repeatedly_request at the moment.
    • Nicely integrate with tracing/metrics/logging/load balancing, like opentelemetry, opencensus, ORCA, xDS, etc.
    • Consider owning the grpc::CompletionQueue and grpc::Server to provide clean shutdown and multi-threading
    • Allow pre-request ServerContext configuration, e.g. to enable compression
    • Support AsyncNotifyWhenDone
    enhancement 
    opened by Tradias 0
Releases (v2.4.0)
  • v2.4.0(Dec 29, 2022)

    Features

    • Add cancellation support to agrpc::RPC.
    • Add a new install option to use asio::recycling_allocator instead of <memory_resource> or Boost.Container. This can be handy in combination with libc++ and standalone Asio to avoid taking a dependency on Boost.
    • Deprecated: Constructor agrpc::GrpcContext{std::make_unique<grpc::CompletionQueue>()}, use the new default constructor instead.
    • Deprecated: Member function agrpc::CancelSafe::is_running has been renamed to is_wait_pending() because it never reported whether the asynchronous operation is still running but instead whether a wait is currently pending.
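    The GrpcContext deprecation above in code form. A minimal sketch, assuming asio-grpc >= 2.4 is available; the poll() call is only there to show the context is usable:

    ```cpp
    #include <agrpc/asio_grpc.hpp>

    #include <memory>

    int main()
    {
        // Deprecated as of v2.4.0: pass an explicitly created completion queue.
        // agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

        // Preferred: the new default constructor creates the completion queue itself.
        agrpc::GrpcContext grpc_context;
        grpc_context.poll();  // no outstanding work yet, so this returns immediately
    }
    ```
    
    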

    Fixes

    • A failed write with grpc::WriteOptions::set_last_message on a high-level client streaming RPC incorrectly completes with true.
    • The operation state obtained by connecting a receiver to the sender returned by agrpc::notify_when_done is deallocated during GrpcContext destruction, leading to a double free because the lifetime of the operation state is expected to be handled by the connector.
    • Using asio::deferred for agrpc::RPC::finish and agrpc::RPC::writes_done does not compile.
    • The rvalue-overload of agrpc::Alarm::wait does not extend the lifetime of the alarm correctly when using completion tokens like asio::deferred.
    • Honor ASIO_GRPC_USE_BOOST_CONTAINER CMake variable when using asio-grpc with add_subdirectory.
    • Compatibility of sender/receiver with Asio 1.25/Boost 1.81.

    Performance

    • Improve performance of high-level client read_initial_metadata, read, write and finish by replacing some if-conditions with compile time dispatch.
    • Reduce allocation size of each health check watch request.
    • Reduce size of all sender operation states by one pointer and the notify_when_done operation state by two pointers.

    Documentation

    • Correctly state that agrpc::RPC::read_initial_metadata may not be called concurrently with write.

    Chore

    • Adjust .clang-tidy rules.
    • Fix CMAKE_UNITY_BUILD and enable it for some CI builds.
    • Add test for add_subdirectory-consumption of asio-grpc.
    • Update doxygen and doxygen-awesome.
    Source code(tar.gz)
    Source code(zip)
  • v2.3.0(Nov 6, 2022)

    Features

    Chore

    • Update libunifex to 2022-10-10 and gRPC to 1.50.1
  • v2.2.0(Oct 20, 2022)

    Features

    • Add agrpc::RPC<>::service/method_name():
    package example.v1;
     
    service Example { rpc Unary(Request) returns (Response) {} }
    
    using RPC = agrpc::RPC<&example::v1::Example::Stub::PrepareAsyncUnary>;
    static_assert(RPC::service_name() == "example.v1.Example");
    static_assert(RPC::method_name() == "Unary");
    
    • Add agrpc::Alarm, an I/O object that wraps grpc::Alarm. A safer alternative to the agrpc::wait free function. Additionally supports ad-hoc waits which automatically extend the lifetime of the underlying gRPC alarm, for example with a callback:
    agrpc::Alarm(grpc_context).wait(deadline, [](bool ok, agrpc::Alarm&& alarm) {});
    

    Fixes

    • Leak of uncompleted sender operation states and asynchronous operations started by the high-level client API upon destruction of the GrpcContext.
    • For each request in the sender overload of agrpc::repeatedly_request, make a copy of the request handler to avoid lifetime surprises when the handler returns unifex::task<>.

    Performance

    • Improve compile times by no longer instantiating entire operation states with two different allocators, but instead only their completion function. This affects all free functions, like agrpc::read and agrpc::wait, as well as the high-level client API.

    Chore

    • Use a more meticulous gRPC shutdown sequence in tests to make them less flaky.
    • Update tests and examples to Boost 1.80.
  • v2.1.0(Sep 6, 2022)

    Features

    • Add a high-level client API - a new major feature that makes writing asynchronous gRPC clients easier and safer.
    • In asio_grpc_protobuf_generate, check the validity of IMPORT_DIRS at CMake configure time instead of build time.
    • The asio-grpc source files can now be consumed without CMake. In that case, the compile definition AGRPC_USE_BOOST_CONTAINER can be used to choose between <memory_resource> and Boost.Container.

    Fixes

    • Enable cancellation support for standalone Asio 1.19.0 instead of 1.20.0. Note that throwing exceptions from the request handler of agrpc::repeatedly_request crashes until 1.19.2.
    • Correctly handle .proto filenames that include dots in asio_grpc_protobuf_generate.
    • Return correct sender from asio::execution::schedule(GrpcExecutor) when compiling in C++17 using MSVC.

    Performance

    • Turn cancellation check in agrpc::repeatedly_request to a no-op when the completion handler does not support cancellation.
    • Replace most calls to std::move and std::forward with static_cast and use a simpler version of std::tuple to improve compile times.

    Documentation

    • State that read_initial_metadata will not complete until the server calls write_initial_metadata or the client performs a write/finish.
    • Add generic bidirectional streaming example.
    • Add dark mode to documentation website.

    Chore

    • No longer run CodeQL in Github Actions since it stopped working and never diagnosed anything useful.
  • v2.0.0(Jul 24, 2022)

    Breaking change: All headers now use snake_case instead of camelCase, e.g. #include <agrpc/asioGrpc.hpp> becomes #include <agrpc/asio_grpc.hpp>.

    Breaking change: Two overloads for requesting unary RPCs have been removed, example: co_await agrpc::request(&example::v1::Example::Stub::AsyncUnary, stub, client_context, request). Please use other means of obtaining a reference to the agrpc::GrpcContext and then call agrpc::request(&example::v1::Example::Stub::AsyncUnary, stub, client_context, request, grpc_context).

    Breaking change: Unsafe overloads of agrpc::get_completion_queue have been removed.

    Breaking change: asio-grpcConfig.cmake will now autolink with gRPC::grpc++_unsecure instead of gRPC::grpc++. If you are using encrypted gRPC then you need to explicitly link with gRPC::grpc++ in your CMake files.
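    For TLS-enabled projects, the explicit link described above might look like the following sketch (the my_app target is hypothetical; gRPC::grpc++ is the target name exported by gRPC's own CMake config):

    ```cmake
    find_package(asio-grpc CONFIG REQUIRED)
    find_package(gRPC CONFIG REQUIRED)

    # asio-grpcConfig.cmake autolinks gRPC::grpc++_unsecure since v2.0.0;
    # encrypted gRPC additionally needs gRPC::grpc++.
    target_link_libraries(my_app PRIVATE asio-grpc::asio-grpc gRPC::grpc++)
    ```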

    Breaking change: If the completion handler passed to an asynchronous operation does not have an associated allocator then asio-grpc will no longer attempt to retrieve one from the associated executor through asio::query(executor, asio::execution::allocator). This behavior had been deprecated in v1.6.0.

    Features

    • Add run_until(deadline) to agrpc::GrpcContext.
    • Add PrepareAsync overloads for requesting RPCs in a thread-safe manner.
    • Deprecate the Async overloads for requesting client-side streaming RPCs as they can lead to race-conditions.
    • Add agrpc::BasicGrpcStream overload to agrpc::get_completion_queue.
    • Make agrpc::AllocatorBinder fully constexpr compatible.

    Style

    • No longer use std::aligned_storage since it is deprecated in C++23.
    • Continue to implement compile time improvements.

    Documentation

    • Add examples for multi-threaded clients and servers.
    • Make the ServerShutdown that is used in examples thread-safe.

    Chore

    • Asio-grpc is now available on conan-center!
    • Use CTest's --build-and-test command for the asio-grpc-cmake test. Also use an older version of CMake (3.16) for that test.
    • Remove GCC 9 and Clang 11 pipelines.
    • Update doctest, gRPC, gtest, libunifex and liburing in the pipelines.
  • v1.7.0(Jun 6, 2022)

    Features

    • Add overloads to agrpc::request and agrpc::repeatedly_request for generic RPCs.
    • Add overload to agrpc::request for starting a client-side unary request. The last argument to this function is an agrpc::GrpcContext instead of a CompletionToken because the function completes immediately.
    • Add overloads to RPC functions for -Interface versions of stub/service/reader/writer/reader_writer/response_writer.
    • Add agrpc::GrpcContext.run_completion_queue()/poll_completion_queue() which only process events from the grpc::CompletionQueue. They can be used as a better performing alternative to run()/poll() where the GrpcContext is not used as an Asio execution context.
    • RPC functions now automatically unwrap std::unique_ptr. (Except stubs which will be addressed in the next release)
    • Stabilize API of agrpc::CancelSafe. Compared to the previous release it now takes a completion signature as template argument, example: Change agrpc::CancelSafe<bool> to agrpc::CancelSafe<void(bool)>
    • (experimental) Add agrpc::GrpcStream, an IoObject-like class for handling RPCs in a cancellation safe manner. See the new handle_topic_subscription example on how it can be used to write Golang/Rust select-style code.
    • (experimental) Replace agrpc::PollContext with a more optimized free function agrpc::run that can be used to run an asio::io_context and agrpc::GrpcContext in the same thread.
    • (experimental) Add utility function to manually process gRPC tags, useful for writing mocked stubs.

    Style

    • Use C++20 concepts when available to potentially improve compile times.
    • Implement several small compile time improvements.

    Documentation

    • Move doxygen documentation to the gh-pages branch.

    Chore

    • Add pipeline for gRPC 1.16.1.
    • Use CMake version 3.16.3 for the CMake install test.
    • Add CMake option to fallback to pkg-config when a dependency cannot be found. This option is for maintainers.
    • Add GTest to vcpkg dependencies.
  • v1.6.0(Apr 27, 2022)

    Features

    • (experimental) Add agrpc::CancelSafe, a utility to make RPC step functions compatible with Asio's cancellation mechanism.
    • agrpc::PollContext now uses a configurable backoff to avoid wasting CPU resources during GrpcContext idle time.
    • agrpc::GrpcContext::run and poll now return a boolean indicating whether at least one operation has been processed.
    • Add .natvis file. It is automatically added to the interface sources of asio-grpc when using the Visual Studio generator. In VSCode it should be added manually to the "visualizerFile" field in launch.json.
    • Defining (BOOST_)ASIO_NO_TS_EXECUTORS now hides member functions of the agrpc::GrpcExecutor related to Networking TS executors: context(), on_work_started(), on_work_finished(), dispatch(), defer() and post().
    • Deprecated: If a completion handler does not have an associated allocator then asio-grpc retrieves one from the associated executor's allocator property. This behavior will be removed in v1.8.0. Please use agrpc::bind_allocator(allocator, completion_token) or asio::bind_allocator instead.

    Fixes

    • Defining (BOOST_)ASIO_HAS_DEDUCED_XXX_TRAIT macros is no longer required when compiling with MSVC in C++17.
    • Using polymorphic_allocator as the associated allocator of the completion handler passed to agrpc::repeatedly_request did not compile in C++20. Likewise, using std::allocator_traits<polymorphic_allocator>::construct would propagate the allocator to the constructor of agrpc::GrpcContext, agrpc::GrpcExecutor and agrpc::AllocatorBinder which was rather unexpected.
    • The comparison operator of agrpc::GrpcExecutor would not compile on older compilers if the allocator is not comparable, e.g. because it is std::allocator<void>.
    • Always return relationship::fork when querying an agrpc::GrpcExecutor for its relationship since that is the only supported setting. Preferring a different relationship property from the executor now returns the executor unchanged.

    Performance

    • The client-side request convenience overloads now also perform unbinding of associated characteristics to reduce memory allocation size.
    • Also unbind asio::allocator_binder when using Boost.Asio 1.79.0 or Asio 1.22.1.

    Style

    • Including agrpc/grpcContext.hpp now only provides forward declarations of the GrpcContext and its member functions. If you experience use of undefined function issues then add an include to agrpc/grpcExecutor.hpp.
    • No longer open the Asio namespace to specialize class templates.
    • Remove unused header <variant>.

    Documentation

    • Document Per-Operation Cancellation properties of all asynchronous functions.
    • Document differences in the behavior of asynchronous operations compared to Asio.
    • Make examples more readable by visually separating individual sub-examples.

    Chore

    • Asio-grpc is now available through the Hunter package manager!
    • Add GCC 8.4.0 to Github Actions.
    • Add CMake target to run -fsyntax-only (GCC/Clang) or /Zs (MSVC) on public header files and build that target as part of Github Actions.
    • Update Boost to 1.79.0.
  • v1.5.1(Mar 29, 2022)

    Fixes

    • Incorrect work counting of agrpc::repeatedly_request with awaitable which could lead to an early stop of an agrpc::GrpcContext or crash during destruction.
  • v1.5.0(Mar 28, 2022)

    Features

    • Add agrpc::bind_allocator which associates an allocator to a completion token's completion handler. Similar to the new asio::bind_allocator except that it also works in older versions of Asio and provides empty class optimization on the allocator.
    • Add agrpc::GrpcContext.poll() which processes only the handlers that are ready to run and then returns without blocking.
    • Adjust the behavior of a stopped agrpc::GrpcContext to mimic that of asio::io_context. Operations submitted to a stopped context will no longer be discarded and instead processed the next time run() or poll() is called.
    • (experimental) Add agrpc::PollContext which repeatedly polls an agrpc::GrpcContext within a different execution context. An example showing how this can be used to run an asio::io_context and agrpc::GrpcContext on the same thread has been added as well.

    Fixes

    • Certain headers did not work without also including other headers.

    Performance

    • Executor binder, allocator binder and cancellation slot binder are now unbound from completion handlers in RPC functions, thereby reducing the memory allocation size.
    • Avoid one dynamic memory allocation per request in agrpc::repeatedly_request with awaitable.

    Style

    • Remove deprecated agrpc::use_scheduler.

    Documentation

    • Add example that shows how to process a bidirectional stream by dispatching work to a thread_pool.
    • Add example that performs double-buffered, allocation-free file transfer using gRPC and io_uring.
    • Update doxygen-awesome to 2.0.2.

    Chore

    • Run msvc-code-analysis alongside CodeQL every week.
    • Update vcpkg and use manifest mode and CMake presets in github actions. Also compile dependencies in release only.
    • Add several /Zc flags when compiling with MSVC to make it more C++-standard compliant.
  • v1.4.0(Feb 25, 2022)

    Features

    • Upcoming breaking change: agrpc::use_scheduler has been renamed to agrpc::use_sender. The old version will be removed in v1.5.0.
    • Stabilize agrpc::repeatedly_request's API. It now supports CancellationSlot/StopToken, asio::awaitable, Sender and a final CompletionToken. Also reduced the memory needed per request. See documentation and examples for more details.
    • Add agrpc::write_last which coalesces write and finish for server-side streaming RPCs and combines write and writes_done for client-side streaming RPCs.
    • Add StopToken support to agrpc::wait.
    • asio_grpc_protobuf_generate can now handle .proto files that are nested within the import directory.
    • Turn get_completion_queue into a function object.
    • Split public header files. They may now be included individually instead of only through the asioGrpc.hpp file.

    Fixes

    • agrpc::wait was incorrectly using the cancellation slot of the CompletionToken instead of the CompletionHandler, leading to broken cancellation propagation.

    Performance

    • Enable likely/unlikely attribute equivalents for C++17.

    Style

    • Issue a static assertion when asio-grpc is used without passing it to a target_link_libraries call in CMake which can lead to hard to decipher errors: https://github.com/Tradias/asio-grpc/issues/12.
    • Move .ipp headers out of the public include directory.
    • Mark RPC functions noexcept when agrpc::use_sender is the CompletionToken.

    Documentation

    • Complete API reference documentation is now available online and in code!
    • Provide an example on how to cancel individual RPC steps based on a deadline.
    • Provide documentation on how to use callbacks and stackless coroutines.
    • Fix compilation of the unifex server-streaming documentation snippets.
    • Mention that calls to find_package are needed in the As a subdirectory section of the README.
    • Update the Using vcpkg section of the README to reflect the automatic interface link library setup performed by asio-grpcConfig.cmake.

    Chore

    • Avoid duplicate compilation of generated protobuf files in tests.
    • Use libc++ for Ubuntu Clang builds.
    • Enable coroutines and unifex for Clang builds.
    • Use minimum supported standalone Asio version for CI and during development.
    • Use asio separate compilation in tests.
    • Update unifex to December release.
  • v1.3.1(Nov 12, 2021)

    Fixes

    • Add inline namespace so that libraries using different backends (Boost.Asio, standalone Asio or libunifex) can be linked together (as long as asio-grpc is not part of their public headers)

    Style

    • Move GrpcSender and ScheduleSender into the detail namespace
    • Remove template parameter from UseScheduler
    • Remove unused arguments in streaming-server.cpp example
  • v1.3.0(Nov 8, 2021)

    Features

    • Initial support for the Unified Executors proposal through libunifex and Asio
    • New special completion token created by agrpc::use_scheduler that causes an RPC step function to return a TypedSender
    • Support for standalone Asio
    • New targets: asio-grpc::asio-grpc-standalone-asio and asio-grpc::asio-grpc-unifex for the standalone Asio and libunifex based versions of this library respectively
    • The CMake package now finds and sets up linkage with dependencies. Can be disabled by setting ASIO_GRPC_DISABLE_AUTOLINK before the call to find_package(asio-grpc)
    • No longer depend on Boost.Intrusive and Boost.Lockfree

    Fixes

    • Add several missing agrpc::write with grpc::WriteOptions overloads

    Performance

    • Faster interaction with the GrpcContext from the thread that called .run() and even more so from other threads
    • Improved GrpcContext::run implementation

    Style

    • Avoid additional, identical instantiations of RPC initiating functions for different completion tokens
    • Turn several functions into niebloids

    Chore

    • Add several more examples
    • Add tests for examples
    • Reduce minimum required CMake version to 3.14 when only installing the project
  • v1.2.0(Oct 30, 2021)

    Features

    • Provide CMake function: asio_grpc_protobuf_generate.
    • Add work tracking to all operations. This makes the behavior of GrpcContext similar to boost::asio::io_context. E.g. instead of writing
    auto guard = boost::asio::make_work_guard(grpc_context);
    boost::asio::post(grpc_context,
                []
                {
                    guard.reset();
                });
    grpc_context.run();
    

    it is now sufficient to write:

    boost::asio::post(grpc_context, [] {});
    grpc_context.run();
    

    Fixes

    • A GrpcContext is now stopped after returning from run() if there were no outstanding operations when the call was made.
    • GrpcExecutor::operator== would fail to compare executors with different execution properties.

    Chore

    • Add CI pipelines for GCC 9.3.0, 11.1.0 and Clang 10.0.0, 11.0.0, 12.0.0.
    • Enable code coverage on sonarcloud.
    • Document how to shut down a gRPC server properly.
  • v1.1.2(Oct 9, 2021)

  • v1.1.1(Oct 8, 2021)

    Fixes

    • Small breaking change: Remove the optional second argument from the agrpc::GrpcContext constructor which would incorrectly turn a completion handler with an associated std::allocator into a completion handler that uses the specified argument for allocations. If control over the allocator is needed then resort to Boost.Asio's standard mechanisms: Associate the allocator with the completion handler or boost::asio::require the allocator from the executor.
    • Bake the choice of ASIO_GRPC_USE_BOOST_CONTAINER into the installed header files to avoid accidental mixing of libraries compiled with and without it. This behavior is also required by vcpkg.

    Chore

    • Add CMake-based pre-commit hooks and CONTRIBUTING guidelines.
  • v1.1.0(Oct 1, 2021)

    Features

    • Use Boost.Container instead of <memory_resource> by setting the CMake variable ASIO_GRPC_USE_BOOST_CONTAINER
    • agrpc::wait can now be cancelled by associating a cancellation slot to the CompletionToken (Boost 1.77.0)
    • (experimental) New function agrpc::repeatedly_request. Can be used by servers to ensure that there are always enough outstanding calls to request a new RPC. It takes a function object that defines how to handle an incoming RPC, e.g. by co_spawning a new coroutine to process it.
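    A sketch of how a request handler might be attached with this function, using the signature from later, stabilized versions of the API and the helloworld service from gRPC's examples; grpc_context and service are assumed to be set up as in a typical asio-grpc server:

    ```cpp
    #include <agrpc/asio_grpc.hpp>

    #include <boost/asio/awaitable.hpp>
    #include <boost/asio/bind_executor.hpp>

    #include "helloworld.grpc.pb.h"

    namespace asio = boost::asio;

    // Registers a handler that is invoked once per incoming SayHello call;
    // repeatedly_request keeps a new outstanding request in flight automatically.
    void register_say_hello(agrpc::GrpcContext& grpc_context,
                            helloworld::Greeter::AsyncService& service)
    {
        agrpc::repeatedly_request(
            &helloworld::Greeter::AsyncService::RequestSayHello, service,
            asio::bind_executor(
                grpc_context,
                [](grpc::ServerContext&, helloworld::HelloRequest& request,
                   grpc::ServerAsyncResponseWriter<helloworld::HelloReply>& writer)
                    -> asio::awaitable<void>
                {
                    helloworld::HelloReply response;
                    response.set_message("Hello " + request.name());
                    co_await agrpc::finish(writer, response, grpc::Status::OK);
                }));
    }
    ```
    
    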

    Fixes

    • boost::asio::require(grpc_executor, boost::asio::execution::allocator) would not compile due to calling a constructor that is explicit

    Performance

    • Reduce memory allocation size of asynchronous operations that use the default allocator or where submitted from the thread that is calling agrpc::GrpcContext::run
    • Remove redundant if-condition when deallocating an asynchronous operation

    Style

    • Provide better error message when attempting to use the DefaultCompletionToken without coroutines being available
    • Remove [[nodiscard]] from agrpc::request functions

    Chore

    • Do not attempt to find_package or link with gRPC::grpc++ and Boost::headers when only installing the CMake project
    • Add Github build and test actions for MacOS-11 (AppleClang 12) and Windows-2022
    • Add Github action for automatically running MarkdownSnippets
    • Fix CMake unity build by excluding protobuf generated source files from it
  • v1.0.1(Sep 10, 2021)

  • v1.0.0(Aug 29, 2021)

Owner
Dennis
Boost.org property_tree module

Maintainer This library is currently maintained by Richard Hodges with generous support from the C++ Alliance. Build Status Branch Status develop mast

Boost.org 36 Dec 6, 2022
Super-project for modularized Boost

Boost C++ Libraries The Boost project provides free peer-reviewed portable C++ source libraries. We emphasize libraries that work well with the C++ St

Boost.org 5.4k Jan 8, 2023
Level up your Beat Saber experience on Quest! AnyTweaks provides various tweaks to help boost your experience on Quest, such as Bloom, FPS Counter and more.

Need help/support? Ask in one of BSMG's support channels for Quest, or join my Discord server! AnyTweaks Level up your Beat Saber experience on Quest!

kaitlyn~ 19 Nov 20, 2022
The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design. This project aims to help C++ developers connect to and interact with services.

Welcome! The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design

Microsoft 7.2k Dec 30, 2022
Corvusoft's Restbed framework brings asynchronous RESTful functionality to C++14 applications.

Restbed Restbed is a comprehensive and consistent programming model for building applications that require seamless and secure communication over HTTP

Corvusoft 1.7k Dec 29, 2022
Cross-platform, efficient, customizable, and robust asynchronous HTTP/WebSocket server C++14 library with the right balance between performance and ease of use

What Is RESTinio? RESTinio is a header-only C++14 library that gives you an embedded HTTP/Websocket server. It is based on standalone version of ASIO

Stiffstream 924 Jan 6, 2023
A C library for asynchronous DNS requests

c-ares This is c-ares, an asynchronous resolver library. It is intended for applications which need to perform DNS queries without blocking, or need t

c-ares 1.5k Jan 3, 2023