A modern C++ (C++11), simple, easy-to-use RPC framework

Overview

rest_rpc

Build Status

A C++11, high-performance, cross-platform, easy-to-use RPC framework.

It's so easy to love RPC.

Building RPC with modern C++ really is this simple and pleasant!

Introduction to rest_rpc

rest_rpc is a high-performance, easy-to-use, cross-platform, header-only C++11 RPC library. Its goal is to make TCP communication extremely simple, so that even developers with no networking background can use it directly. You can get productive quickly and focus only on your own business logic.

Who is using rest_rpc

  1. Bosch Automotive
  2. Zhejiang Zhiwang Technology
  3. purecpp.org

Add your organization here.

Features of rest_rpc

rest_rpc gives users a very simple, easy-to-use interface; a few lines of code are enough to implement RPC communication. Let's look at the first example.

An addition RPC service

//Server: register the add RPC service

struct dummy{
	int add(rpc_conn conn, int a, int b) { return a + b; }
};

int main(){
	rpc_server server(9000, std::thread::hardware_concurrency());

	dummy d;
	server.register_handler("add", &dummy::add, &d);
	
	server.run();
}
("add", 1, 2); client.run(); } ">
//客户端调用加法的rpc服务
int main(){
	rpc_client client("127.0.0.1", 9000);
	client.connect();

	int result = client.call
   
    ("add", 1, 2);

	client.run();
}

   

An RPC service that returns an object

//Server: register the get_person RPC service

//1. First, define the person struct
struct person {
	int id;
	std::string name;
	int age;

	MSGPACK_DEFINE(id, name, age);
};

//2. Provide the service
person get_person(rpc_conn conn) {
	return { 1, "tom", 20 };
}

int main(){
	//...
	server.register_handler("get_person", get_person);
}
("get_person"); std::cout << result.name << std::endl; client.run(); } ">
//客户端调用获取person对象的rpc服务
int main(){
	rpc_client client("127.0.0.1", 9000);
	client.connect();
	
	person result = client.call
   
    ("get_person");
	std::cout << result.name << std::endl;
	
	client.run();
}

   

Async?

Sync?

Future?

Callback?

Back when deciding what kind of interface to provide, there was quite a debate in the community group: some people wanted a callback interface, others wanted a future interface. In the end I decided to provide both, so everyone gets what they want :)

All of these interfaces are now available. Use whichever style you prefer. Let's see how to use them:

//Server: provide an echo service
std::string echo(rpc_conn conn, const std::string& src) {
	return src;
}

server.register_handler("echo", echo);

Client synchronous interface

auto result = client.call<std::string>("echo", "hello");

Client asynchronous callback interface

client.async_call("echo", [](boost::system::error_code ec, string_view data){
	auto str = as<std::string>(data);
	std::cout << "echo " << str << '\n';
});

Notes on the async_call interface

There are two overloads of async_call: one returns a future, and the other is an asynchronous interface with a timeout.

The future-returning async_call interface:

std::future<req_result> future = client.async_call<FUTURE>("echo", "purecpp");

The asynchronous callback interface takes the timeout, in milliseconds, as a template argument:

async_call<timeout_ms>("some_rpc_service_name", callback, service_args...);

If you don't set a timeout explicitly, the default timeout of 5s is used.

async_call("some_rpc_service_name", callback, args...);
client.async_call("echo", [](boost::system::error_code ec, string_view data) {
    if (ec) {
        std::cout << ec.message() << " " << data << "\n";
        return;
    }

    auto result = as<std::string>(data);
    std::cout << result << " async\n";
}, "purecpp");

Client asynchronous future interface

auto future = client.async_call<FUTURE>("echo", "hello");
auto status = future.wait_for(std::chrono::seconds(2));
if (status == std::future_status::timeout) {
	std::cout << "timeout\n";
}
else if (status == std::future_status::ready) {
	auto str = future.get().as<std::string>();
	std::cout << "echo " << str << '\n';
}

Besides these nice interfaces, rest_rpc also supports publish/subscribe, which is something many RPC libraries currently cannot do.

Server-side publish/subscribe examples are here:

https://github.com/qicosmos/rest_rpc/blob/master/examples/server/main.cpp#L121
https://github.com/qicosmos/rest_rpc/blob/master/examples/client/main.cpp#L383

In our performance comparisons against grpc and brpc, rest_rpc came out fastest of the three, well ahead of grpc.

The benchmark results are here:

https://github.com/qicosmos/rest_rpc/blob/master/doc/%E5%8D%95%E6%9C%BA%E4%B8%8Arest_rpc%E5%92%8Cbrpc%E6%80%A7%E8%83%BD%E6%B5%8B%E8%AF%95.md

More uses of rest_rpc

See the rest_rpc examples:

https://github.com/qicosmos/rest_rpc/tree/master/examples

Future work

Make an IDL tool to generate the client code.

Comments
  • client connect() mysteriously returns true (even though the server was never started)

    libtest.cpp is as follows:

    #include "thread" #include "iostream" #include "chrono" #include "rpc_client.hpp"

    using namespace rest_rpc; using namespace rest_rpc::rpc_service;

    bool online {};

    bool init() { std::thread check_thread([] () { rpc_client check_client("127.0.0.1", 3000);

    	while (true) {
    		auto current_status = check_client.connect();
    		check_client.close();
    		std::cout << "check server ..." << std::endl;
    
    		if (current_status == online) {
    			// do nothing
    		} else {
    			online = current_status;
    			if (!current_status) {
    				std::cout << "server is offline" << std::endl;
    			} else {
    				std::cout << "server is online ..." << std::endl;
    			}				
    		}
    
    		std::this_thread::sleep_for(std::chrono::seconds(5));
    	}
    });
    check_thread.detach();
    
    return true;
    

    }

    build libtest.cpp to shared library (.so file)

    g++ -fPIC -shared -o libtest.so libtest.cpp -I./rest_rpc/include -I./rest_rpc/third/msgpack/include -lpthread

    test-1.cpp is as follows:

    #include "thread" #include "iostream"

    bool init();

    int main() { init();

    while (true) {
    	std::this_thread::sleep_for(std::chrono::seconds(5));
    }
    
    // ...	
    std::cout << __LINE__ << std::endl;
    	
    return 0;
    

    }

    Link against libtest.so to produce the executable:

    g++ -o test-1 test-1.cpp -std=c++11 -I./rest_rpc/include -I./rest_rpc/third/msgpack/include -lpthread -ltest -L.

    Below is a snippet from running test-1:

    $ ./test-1
    check server ...
    check server ...
    check server ...
    check server ...
    server is online ...
    check server ...
    server is offline
    check server ...
    [many more "check server ..." lines, with "server is online ..." / "server is offline" pairs appearing intermittently]

    The server (port 3000) was never started at all, yet while test-1 runs, connect() returns true from time to time, and with fairly high probability.

    opened by wzhou1974 5
  • How can the asynchronous interface be used together with cinatra?

    I've recently been using cinatra and rest_rpc, with cinatra as the main framework. If I understand correctly, calling rest_rpc's synchronous interface inside a cinatra callback blocks a thread, which costs performance; but if I use the asynchronous interface, the cinatra callback returns quickly and sends the response early, which is also a problem. Also, since the two libraries each have their own io_service, my current approach is to call poll_one on each of them alternately. Is that the recommended way?

    opened by Kidsunbo 3
  • The asynchronous callback interface cannot catch the error when the server exits

    With rest_rpc's asynchronous callback interface, if the server is shut down while the client is in use, setting a timeout as in the sample code does not catch this error.

    client:

    void test_callback_add() {
    	rpc_client client;
      	bool r = client.connect("127.0.0.1", 9123);
    	int cnt = 0;
    	for (int i = 0; i < 1000; i++) {	
    		client.async_call<500>("add", [&](asio::error_code ec, string_view data) {
    			if (ec) {       
    				cnt++;         
    				std::cout << ec.message() << "\n";
    				return;
    			}
    			auto result = as<int>(data);
    			std::cout << result << "\n";
    			cnt++;
    		}, i, i);
    	}
    	while(cnt != 1000) {}
    	client.stop();
    }
    
    opened by Cai-Yao 2
  • Cannot compile http_epoll.cpp


    ➜  examples++ git:(master) ✗ ls
    a.out                  echo_client_conn.cpp   framing.cpp  http_epoll.cpp  server.cpp              unix_dgram_client.cpp        unix_server_stream.cpp
    client.cpp             echo_client_sndto.cpp  http_2.cpp   libsocket++.a   test.sh                 unix_dgram_server.cpp
    dgram_over_stream.cpp  echo_server.cpp        http.cpp     libsocket++.so  unix_client_stream.cpp  unix_dgram_syslogclient.cpp
    ➜  examples++ git:(master) ✗ g++ http_epoll.cpp -std=c++17 -lsocket++
    In file included from http_epoll.cpp:3:
    ../headers/epoll.hpp: In destructor ‘libsocket::epollset<SocketT>::~epollset()’:
    ../headers/epoll.hpp:127:5: error: there are no arguments to ‘close’ that depend on a template parameter, so a declaration of ‘close’ must be available [-fpermissive]
      127 |     close(epollfd);
          |     ^~~~~
    ../headers/epoll.hpp:127:5: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
    http_epoll.cpp: In function ‘int main()’:
    http_epoll.cpp:35:36: error: use of deleted function ‘libsocket::inet_stream& libsocket::inet_stream::operator=(const libsocket::inet_stream&)’
       35 |             sock = *(ready.first[0]);
          |                                    ^
    In file included from http_epoll.cpp:5:
    ../headers/inetclientstream.hpp:54:7: note: ‘libsocket::inet_stream& libsocket::inet_stream::operator=(const libsocket::inet_stream&)’ is implicitly deleted because the default definition would be ill-formed:
       54 | class inet_stream : public inet_socket, public stream_client_socket {
          |       ^~~~~~~~~~~
    ../headers/inetclientstream.hpp:54:7: error: use of deleted function ‘libsocket::inet_socket& libsocket::inet_socket::operator=(const libsocket::inet_socket&)’
    In file included from ../headers/inetclientstream.hpp:8,
                     from http_epoll.cpp:5:
    ../headers/inetbase.hpp:52:7: note: ‘libsocket::inet_socket& libsocket::inet_socket::operator=(const libsocket::inet_socket&)’ is implicitly deleted because the default definition would be ill-formed:
       52 | class inet_socket : public virtual socket {
          |       ^~~~~~~~~~~
    ../headers/inetbase.hpp:52:7: error: use of deleted function ‘constexpr libsocket::socket& libsocket::socket::operator=(const libsocket::socket&)’
    In file included from ../headers/epoll.hpp:45,
                     from http_epoll.cpp:3:
    ../headers/socket.hpp:71:7: note: ‘constexpr libsocket::socket& libsocket::socket::operator=(const libsocket::socket&)’ is implicitly declared as deleted because ‘libsocket::socket’ declares a move constructor or move assignment operator
       71 | class socket {
          |       ^~~~~~
    In file included from http_epoll.cpp:5:
    ../headers/inetclientstream.hpp:54:7: error: use of deleted function ‘libsocket::stream_client_socket& libsocket::stream_client_socket::operator=(const libsocket::stream_client_socket&)’
       54 | class inet_stream : public inet_socket, public stream_client_socket {
          |       ^~~~~~~~~~~
    In file included from ../headers/inetclientstream.hpp:9,
                     from http_epoll.cpp:5:
    ../headers/streamclient.hpp:52:7: note: ‘libsocket::stream_client_socket& libsocket::stream_client_socket::operator=(const libsocket::stream_client_socket&)’ is implicitly declared as deleted because ‘libsocket::stream_client_socket’ declares a move constructor or move assignment operator
       52 | class stream_client_socket : public virtual socket {
          |       ^~~~~~~~~~~~~~~~~~~~
    In file included from http_epoll.cpp:3:
    ../headers/epoll.hpp: In instantiation of ‘libsocket::epollset<SocketT>::~epollset() [with SocketT = libsocket::inet_stream]’:
    http_epoll.cpp:23:31:   required from here
    ../headers/epoll.hpp:127:10: error: ‘close’ was not declared in this scope; did you mean ‘pclose’?
      127 |     close(epollfd);
          |     ~~~~~^~~~~~~~~
          |     pclose
    ➜  examples++ git:(master) ✗ 
    
    opened by nqf 2
  • "Operation aborted" at runtime

    client

    using namespace rest_rpc;
    using namespace rest_rpc::rpc_service;
    rpc_client client("127.0.0.1", 9123);
    void test_add() {
    	try {
    		
    		bool r = client.connect();
    		if (!r) {
    			std::cout << "connect timeout" << std::endl;
    			return;
    		}
    		{
    			auto result = client.call<int>("add", 2, 3);
    			printf("result: %d\n", result);
    		}
    	}
    	catch (const std::exception & e) {
    		std::cout << e.what() << std::endl;
    	}
    }
    
    int main()
    {
    	long long start_time = getCurrentPreciseTime();
    	test_add();
    	long long end_time = getCurrentPreciseTime();
    	printf("read cost %lld\n", end_time - start_time);
    	return 0;
    }
    

    server

    struct dummy{
    	int add(rpc_conn conn, int a, int b) {
    		auto shared_conn = conn.lock();
    		// if (shared_conn) {
    		// 	shared_conn->set_user_data(std::string("aa"));
    		// 	auto s = conn.lock()->get_user_data<std::string>();
    		// 	std::cout << s << '\n'; //aa
    		// }
    		int res = 0;
    		for (int i = 0; i < 100000; ++i) {
    			res = (res + i + b + a) % 100000;
    		}
    		return a + b;
    
    
    	}
    };
    
    int main() {
        rpc_server server(9123, std::thread::hardware_concurrency());

    	dummy d;
    	server.register_handler("add", &dummy::add, &d);

    	server.run();

    	return 0;
    }

    result

    result: 5
    Operation aborted.
    

    How should I eliminate this error?

    opened by Cai-Yao 1
  • core dumped: core dump when an RPC times out

    Description: the server-side RPC uses sleep to simulate a long processing time, and the client's timeout is shorter than the server's execution time. The core dump reproduces reliably on my machine.

    cli.out: include/rest_rpc/rpc_client.hpp:618: void rest_rpc::rpc_client::call_back(uint64_t, const error_code&, nonstd::sv_lite::string_view): Assertion `f' failed.
    Aborted (core dumped)
    

    server.cpp

    #include <rest_rpc.hpp>
    using namespace rest_rpc;
    using namespace rpc_service;
    #include <fstream>
    #include <iostream>
    #include <chrono>
    #include <thread>
     
    #include "qps.h"
    
    void hello(rpc_conn conn, const std::string &str) {
      static thread_local int remote_read_count = 0;
      using namespace std::chrono_literals;
      std::this_thread::sleep_for(5000ms);
      remote_read_count++;
      std::cout << "remote_read_count = " << remote_read_count << std::endl;
    }
    
    int main() {
      //  benchmark_test();
      std::cout << "std::thread::hardware_concurrency() = " << std::thread::hardware_concurrency() << std::endl;
      rpc_server server(9000, std::thread::hardware_concurrency());
    
      server.register_handler("hello", hello);
      server.set_network_err_callback(
          [](std::shared_ptr<connection> conn, std::string reason) {
            std::cout << "remote client address: " << conn->remote_address()
                      << " networking error, reason: " << reason << "\n";
          });
    
      bool stop = false;
      server.run();
      stop = true;
    }
    

    client

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <rest_rpc.hpp>
    
    using namespace rest_rpc;
    using namespace rest_rpc::rpc_service;
    
    void test_connect() {
      rpc_client client;
      client.enable_auto_reconnect(); // automatic reconnect
      client.enable_auto_heartbeat(); // automatic heartbeat
      bool r = client.connect("127.0.0.1", 9000);
    
      int count = 0;
      while (true) {
        if (client.has_connected()) {
          std::cout << "connected ok\n";
          try {
            client.call<3000>("hello", "purecpp");
          } catch (const std::exception &e) {
            std::cout << e.what() << std::endl;
          }
        } else {
          std::cout << "connected failed: " << count++ << "\n";
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
      }
    
    }
    
    
    int main() {
      test_connect();
      return 0;
    }
    
    opened by wangqiim 1
  • Error: in the sample program, registering a class member function that returns a struct fails to compile!

    Environment: VS2017 + boost_1_75_0, Windows 10 Professional

    Original sample (modified parts):

    struct person {
    	int id;
    	std::string name;
    	int age;

    	MSGPACK_DEFINE(id, name, age);
    };

    // test namespace
    namespace tns {

    	// the sample class, with a test function added
    	struct dummy {

    		int add(rpc_conn conn, int a, int b) {
    			return a + b;
    		}

    		//================================
    		// member of a class inside a namespace, returning a struct
    		person test(int a) {
    			person p;
    			return p;
    		}
    		//================================
    	};

    }

    // the calling code
    rpc_server server(9000, std::thread::hardware_concurrency());

    tns::dummy *d = new tns::dummy();

    //server.register_handler("add", &tns::dummy::add, d); /* returns a non-struct value: works fine */

    //=====================================================
    // the registration below fails to compile with:
    // error C2672: "rest_rpc::rpc_service::router::call_member": no matching overloaded function found
    //=====================================================
    server.register_handler("test", &tns::dummy::test, d);
    //=====================================================

    Full error output:

    1>d:\rest_rpc\rest_rpc\router.h(151): error C2672: “rest_rpc::rpc_service::router::call_member”: 未找到匹配的重载函数
    1>d:\rest_rpc\rest_rpc\router.h(175): note: 参见对正在编译的函数 模板 实例化“void rest_rpc::rpc_service::router::invoker<Function,rest_rpc::ExecMode::sync>::apply_member<rest_rpc::ExecMode::sync,Self>(const Function &,Self *,std::weak_ptr<rest_rpc::rpc_service::connection>,const char *,size_t,std::string &,rest_rpc::ExecMode &)”的引用
    1>        with [ Function=person (__thiscall tns::dummy::* )(int), Self=tns::dummy ]
    1>d:\rest_rpc\rest_rpc\router.h(172): note: 参见对正在编译的函数 模板 实例化“void rest_rpc::rpc_service::router::invoker<Function,rest_rpc::ExecMode::sync>::apply_member<rest_rpc::ExecMode::sync,Self>(const Function &,Self *,std::weak_ptr<rest_rpc::rpc_service::connection>,const char *,size_t,std::string &,rest_rpc::ExecMode &)”的引用
    1>        with [ Function=person (__thiscall tns::dummy::* )(int), Self=tns::dummy ]
    1>d:\rest_rpc\rest_rpc\router.h(25): note: 参见对正在编译的函数 模板 实例化“void rest_rpc::rpc_service::router::register_member_func<rest_rpc::ExecMode::sync,Function,Self>(const std::string &,const Function &,Self *)”的引用
    1>        with [ Function=person (__thiscall tns::dummy::* )(int), Self=tns::dummy ]
    1>d:\rest_rpc\rest_rpc\rpc_server.h(86): note: 参见对正在编译的函数 模板 实例化“void rest_rpc::rpc_service::router::register_handler<rest_rpc::ExecMode::sync,Function,Self>(const std::string &,const Function &,Self )”的引用
    1>        with [ Function=person (__thiscall tns::dummy::* )(int), Self=tns::dummy ]
    1>d:\rest_rpc\examples\server\main.cpp(124): note: 参见对正在编译的函数 模板 实例化“void rest_rpc::rpc_service::rpc_server::register_handler<rest_rpc::ExecMode::sync,person(__thiscall tns::dummy:: )(int),tns::dummy>(const std::string &,const Function &,Self *)”的引用
    1>        with [ Function=person (__thiscall tns::dummy::* )(int), Self=tns::dummy ]
    1>d:\boost_1_75_0\boost\asio\use_future.hpp(139): note: 参见对正在编译的 类 模板 实例化 "boost::asio::use_future_t<std::allocator>::std_allocator_void" 的引用
    1>d:\boost_1_75_0\boost\asio\use_future.hpp(147): note: 参见对正在编译的 类 模板 实例化 "boost::asio::use_future_t<std::allocator>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\relationship.hpp(268): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::relationship_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\relationship.hpp(309): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::relationship_t" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\outstanding_work.hpp(269): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::outstanding_work_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\outstanding_work.hpp(311): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::outstanding_work_t" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\occupancy.hpp(123): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::occupancy_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\mapping.hpp(315): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::mapping_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\mapping.hpp(377): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::mapping_t" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\context.hpp(130): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::context_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\bulk_guarantee.hpp(321): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::bulk_guarantee_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\bulk_guarantee.hpp(385): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::bulk_guarantee_t" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\blocking_adaptation.hpp(276): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::blocking_adaptation_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\blocking_adaptation.hpp(318): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::blocking_adaptation_t" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\blocking.hpp(331): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::blocking_t<0>" 的引用
    1>d:\boost_1_75_0\boost\asio\execution\blocking.hpp(393): note: 参见对正在编译的 类 模板 实例化 "boost::asio::execution::detail::blocking_t" 的引用
    1>d:\rest_rpc\rest_rpc\router.h(151): error C2893: 未能使函数模板“std::enable_if<std::is_void<std::result_of<F(Self,std::weak_ptr<rest_rpc::rpc_service::connection>,Args...)>::type>::value,void>::type rest_rpc::rpc_service::router::call_member(const F &,Self *,std::weak_ptr<rest_rpc::rpc_service::connection>,std::string &,std::tuple<Arg,Args...>)”专用化
    1>d:\rest_rpc\rest_rpc\router.h(151): note: 用下列模板参数:
    1>d:\rest_rpc\rest_rpc\router.h(151): note: “F=Function”
    1>d:\rest_rpc\rest_rpc\router.h(151): note: “Self=Self”
    1>d:\rest_rpc\rest_rpc\router.h(151): note: “Arg=std::basic_string<char,std::char_traits,std::allocator>”
    1>d:\rest_rpc\rest_rpc\router.h(151): note: “Args={}”
    1>已完成生成项目“basic_server.vcxproj”的操作 - 失败。

    opened by sdyhrj 1
  • Bump commons-io from 2.5 to 2.7 in /java


    Bumps commons-io from 2.5 to 2.7.


    dependencies 
    opened by dependabot[bot] 0
  • Fix join an unjoinable thread in rpc client.


    The thd_ in rpc_client.hpp will be joined twice, which might introduce the following exception:

    libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error:
          recursive_mutex lock failed: Invalid argument
    
    opened by jovany-wang 0
  • A simple rest_rpc demo errors out

    The simplest rest_rpc demo server:

    #include <stdio.h>
    #include <iostream>
    #include <fstream>
    #include <string>
    
    #include <rest_rpc.hpp>
    
    using namespace rest_rpc;
    using namespace rpc_service;
    
    void hello(rpc_conn conn, const std::string &str) {
      std::cout << "hello " << str << std::endl;
    }
    
    int main(int argc, char *argv[]) {
    
        rpc_server server(9000, std::thread::hardware_concurrency());
    
        server.register_handler("hello", hello);
        server.set_network_err_callback(
            [](std::shared_ptr<connection> conn, std::string reason) {
                std::cout << "remote client address: " << conn->remote_address()
                        << " networking error, reason: " << reason << "\n";
            });
    
        server.run();
    
        return 0;
    }
    

    client

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <rest_rpc.hpp>
    
    using namespace rest_rpc;
    using namespace rest_rpc::rpc_service;
    
    void test_hello() {
        try {
            rpc_client client("127.0.0.1", 9000);
            bool r = client.connect();
            if (!r) {
                std::cout << "connect timeout" << std::endl;
                return;
            }
            client.call<2000, std::string>("hello", "rest_rpc");
        } catch (const std::exception &e) {
            std::cout << "Err." << e.what() << std::endl;
        }
    }
    
    void test_connect() {
        rpc_client client;
        client.enable_auto_reconnect(); // automatic reconnect
        client.enable_auto_heartbeat(); // automatic heartbeat
        bool r = client.connect("127.0.0.1", 9000);
        int count = 0;
        while (true) {
            if (client.has_connected()) {
                std::cout << "connected ok\n";
                break;
            }
            else {
                std::cout << "connected failed: " << count++ << "\n";
            }
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    
    int main(int argc, char *argv[]) {
        test_connect();
        test_hello();
    
        return 0;
    }
    

    The error output is as follows:

    server

    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    hello rest_rpc
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    hello rest_rpc
    remote client address: 127.0.0.1 networking error, reason: End of file
    remote client address: 127.0.0.1 networking error, reason: End of file
    

    client

    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    End of file
    Segmentation fault
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    End of file
    Segmentation fault
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    End of file
    Segmentation fault
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    Operation aborted.
    basic_client: tpp.c:84: __pthread_tpp_change_priority: Assertion `new_prio == -1 || (new_prio >= fifo_min_prio && new_prio <= fifo_max_prio)' failed.
    Aborted
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    Operation aborted.
    basic_client: ../nptl/pthread_mutex_lock.c:433: __pthread_mutex_lock_full: Assertion `INTERNAL_SYSCALL_ERRNO (e, __err) != ESRCH || !robust' failed.
    Aborted
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    Operation aborted.
    Operation aborted.
    [email protected]:~/application/rest_rpc_master/demo/build$ ./basic_client 
    connected ok
    End of file
    Operation aborted.
    [email protected]:~/application/rest_rpc_master/demo/build$
    

    After I updated asio to the latest version, the segmentation fault no longer occurred, but the "Operation aborted" error remains.

    opened by summerlotus513 1
  • Large performance fluctuations in a multithreaded test

    client

    #include <rest_rpc.hpp>
    #include <sys/time.h>
    #include <vector>
    #include <thread>
    #include <chrono>
    static long long getCurrentPreciseTime() {
        struct timeval tv;  
        gettimeofday(&tv,NULL); 
        return (long long)tv.tv_sec * 1000 * 1000 + tv.tv_usec;
    }
    
    using namespace rest_rpc;
    using namespace rest_rpc::rpc_service;
    void test_add() {
    	rpc_client client("127.0.0.1", 9123);
    	{
    		std::vector<char> req;
    		req.resize(128 * 100);
    		for (int i = 0; i < 10; i++) {
    			bool r = client.connect();
    			if (!r) {
    				std::cout << "connect timeout" << std::endl;
    				return;
    			}
    			
    			auto result = client.call<std::tuple<int32_t, std::vector<char>, int32_t>>("tuple", i, i, req);
    		}
    		
    	}
    	// client.close();
    	client.stop();
    }
    
    int main()
    {
    	std::vector<std::thread> threads;
    	long long start_time = getCurrentPreciseTime();
    	for(int i = 0; i < 150; i++) {
    		threads.push_back(std::thread(test_add));
    	}
    	for(int i = 0; i < 150; i++) {
    		threads[i].join();
    	}
    	long long end_time = getCurrentPreciseTime();
    	printf("read cost %lld\n", end_time - start_time);
    	return 0;
    }
    

    server

    #include "rest_rpc.hpp"
    #include <string>
    using namespace rest_rpc;
    using namespace rpc_service;
    
    struct dummy{
    	std::tuple<int32_t, std::vector<char>, int32_t> rpc_test_tuple(rpc_conn conn, int32_t x, int32_t y, std::vector<char> req) {
    		std::vector<char> res(8);
    		auto tuple = std::make_tuple(x, res, y);
    
    		return tuple;
    	}
    };
    
    int main() {
        rpc_server server(9123, std::thread::hardware_concurrency() * 3);
    
    	dummy d;
    	//server.register_handler("add", &dummy::add, &d);
    	server.register_handler("tuple", &dummy::rpc_test_tuple, &d);
    	server.run();
    	
    	return 0;
    }
    

    result

    $ ./client 
    read cost 48257
    $ ./client 
    read cost 50969
    $ ./client 
    read cost 3021651
    $ ./client 
    read cost 3020548
    $ ./client 
    read cost 3030853
    $ ./client 
    read cost 3034473
    $ ./client 
    read cost 3031834
    $ ./client 
    read cost 3039597
    $ ./client 
    read cost 48282
    $ ./client 
    read cost 51913
    $ ./client 
    read cost 3038084
    

    What could be the cause of this? Is there some detail I have overlooked?

    opened by Cai-Yao 3
  • Client crashes after the server is killed

    Hi, I have a question. I'm testing simple client code against the server example in examples. After the client and server are connected, if I kill -9 the server, the client crashes.

    The client code is as follows:

    int main() {
      rpc_client client("127.0.0.1", 9000);
      bool r = client.connect();
      if (!r) {
        std::cout << "connect timeout" << std::endl;
        //return;
      }
      while(1){
          {
            auto result = client.call<int>("add", 1, 2);
            std::cout << result << std::endl;
          }

          {
            auto result = client.call<2000, int>("add", 1, 2);
            std::cout << result << std::endl;
          }

          ::usleep(100000);
      }
      return 0;
    }
    

    The server side is the server.c from the examples, and the client holds a long-lived connection. How can I fix this client crash?
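    One common mitigation for this class of problem (a sketch of the general pattern, not an official rest_rpc fix) is to wrap each blocking call in try/catch so that a dead peer surfaces as a handled error instead of a crash. The stub `fake_call` below stands in for `client.call<int>("add", ...)` and is assumed to throw when the connection is down:

    ```cpp
    #include <iostream>
    #include <stdexcept>

    // Stand-in for client.call<int>("add", a, b): throws when the
    // connection is down (hypothetical; real rest_rpc behavior may differ).
    int fake_call(bool connected, int a, int b) {
        if (!connected) throw std::runtime_error("connection lost");
        return a + b;
    }

    // Guarded call: returns true on success, false (no crash) on failure.
    bool guarded_add(bool connected, int a, int b, int& out) {
        try {
            out = fake_call(connected, a, b);
            return true;
        } catch (const std::exception& e) {
            std::cerr << "rpc failed: " << e.what() << '\n';
            return false;
        }
    }

    int main() {
        int r = 0;
        bool ok1 = guarded_add(true, 1, 2, r);   // succeeds, r == 3
        bool ok2 = guarded_add(false, 1, 2, r);  // "server killed": logged, no crash
        std::cout << ok1 << " " << r << " " << ok2 << "\n"; // prints "1 3 0"
        return 0;
    }
    ```

    In the reporter's loop, the same guard around each `client.call` (plus checking the connection and reconnecting before retrying) would keep the process alive after the server disappears.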

    opened by Teacher-May 1
Releases(V0.11)
Owner
qicosmos
purecpp.org, WeChat official account: purecpp