multithreading - Efficient use of gRPC Async API with Multiple Channels and a unique Completion Queue in C++ - Stack Overflow


Currently I'm building some microservices for AWS. My team chose Drogon for the server-side functionality and gRPC to consume other microservices.

The containers have 4 logical cores (2 physical). So my plan was to use one thread to listen for/send HTTP requests, 2 worker threads for processing/heavy tasks, and one thread for gRPC.

Currently an orchestrator can consume 5-10 microservices living on different hosts, like localhost:444, localhost:445, ... so in the orchestrator I think I would need N channels and N stubs (since I can't share channels across different hosts).

Using the Async API, I have created one Completion Queue for all the clients, along with a thread that will constantly wait for the next result:

auto AsyncRpcEngine::start_completion_queue_loop_thread() -> void
{
    // Detached, so it cannot be joined at shutdown; the loop only exits
    // once completion_queue_.Shutdown() is called and the queue drains.
    std::thread{&AsyncRpcEngine::completion_queue_loop, this}.detach();
}

auto AsyncRpcEngine::retrieve_completion_queue() -> grpc::CompletionQueue&
{
    return completion_queue_;
}

auto AsyncRpcEngine::completion_queue_loop() -> void
{
    void* function_tag = nullptr;
    bool ok = false;

    // Next() blocks until an event is ready; it returns false only after
    // Shutdown() has been called on the queue and it has been drained.
    while (completion_queue_.Next(&function_tag, &ok))
    {
        auto function_call = static_cast< std::function<void ()>* >(function_tag);

        // Note: `ok` reports whether the operation completed successfully
        // (e.g. false for a failed write); it should be checked or
        // forwarded to the callback rather than ignored.
        (*function_call)();

        delete function_call;
    }
}

So I have a few questions:

  1. Is my approach of using one Completion Queue for N stubs/clients a good solution? I don't expect to execute an excessive amount of gRPC calls; I would say around 20-100 calls per second.
  2. Is there going to be a big overhead from creating multiple channels? I didn't find much info about how channels work, but my hope is that they don't create internal threads and clog the system.
  3. I chose the Async API because it lets me have more control over the threads/async operations, and I already have some functions to build callbacks and awaitables for the gRPC tasks. But if I'm missing an optimization or a better approach to my problem, please let me know how I can make my program more efficient.

asked Feb 4 at 4:16 by GerardoBelic

1 Answer


In case you haven't yet, give this a read to begin with: https://grpc.io/docs/guides/performance/

We recommend using the callback API for all new development. We are quickly approaching the point where it will become the most performant option. That said, you will not have control of the number of threads that gRPC utilizes unless you create a custom EventEngine. gRPC will create its own threads and scale a thread pool to optimize gRPC performance - and it will automatically scale back the number of threads if performance decreases.
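
For comparison, a client-side call in the callback API looks roughly like the sketch below. This assumes a hypothetical Greeter service with generated stubs (greeter.grpc.pb.h, HelloRequest, HelloReply are illustrative names, not from the question), so it is a shape sketch rather than compilable code:

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>
#include "greeter.grpc.pb.h"  // hypothetical generated header

void say_hello(Greeter::Stub& stub)
{
    // Keep the call state alive until the callback fires.
    auto context = std::make_shared<grpc::ClientContext>();
    auto request = std::make_shared<HelloRequest>();
    auto reply   = std::make_shared<HelloReply>();
    request->set_name("world");

    // Callback API: gRPC invokes the lambda on one of its own threads
    // when the RPC finishes -- no completion queue for you to drain.
    stub.async()->SayHello(context.get(), request.get(), reply.get(),
        [context, request, reply](grpc::Status status) {
            if (status.ok()) {
                // use reply->message()
            }
        });
}
```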

There is some overhead with multiple channels, but there is also a possibility that TCP connections are shared between channels (known as "subchannel connection sharing"). Performance also depends on your unique workload in your polling threads. I'd recommend you start with the performance guidelines above, and if you stick with managing your own threads in the async API, you will likely want to play with these parameters and see what performs best for your application. Note that performance for the Async API will change in future versions, as we make the Callback API the most performant option.
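
As a sketch of the multi-channel setup under discussion (one channel and stub per host:port target), assuming the grpcpp API and the GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL channel argument, which controls whether a channel draws connections from the global subchannel pool:

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>
#include <string>
#include <vector>

// One channel per distinct host:port, as in the question.
auto make_channels(const std::vector<std::string>& targets)
    -> std::vector<std::shared_ptr<grpc::Channel>>
{
    std::vector<std::shared_ptr<grpc::Channel>> channels;
    channels.reserve(targets.size());

    for (const auto& target : targets)  // e.g. "localhost:444", "localhost:445"
    {
        grpc::ChannelArguments args;
        // By default channels may share subchannels (and thus TCP
        // connections) through a global pool; setting this to 1 would
        // give the channel a private pool instead.
        args.SetInt(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL, 0);

        channels.push_back(grpc::CreateCustomChannel(
            target, grpc::InsecureChannelCredentials(), args));
    }
    return channels;
}
```

Each stub is then constructed over the channel for its host; whether a private or shared subchannel pool performs better depends on the workload, which is why measuring against your own traffic matters.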
