I need to call a Celery task for each gRPC request and return the result. In the default gRPC implementation, each request is processed in a separate thread from a thread pool. In my case, the server is supposed to process ~400 requests per second in batch mode, so a single request may have to wait up to 1 second for its result because of the batching, which means the thread pool would need more than 400 workers to avoid blocking. Can this be done asynchronously? Thanks a lot.

```python
class EventReporting(ss_pb2.BetaEventReportingServicer, ss_pb2.BetaDeviceMgtServicer):
    def ReportEvent(self, request, context):
        res = tasks.add.delay(1, 2)
        result = res.get()  # <- here I have to block
        return ss_pb2.GeneralReply(message='Hello, %s!' % result.message)
```

Solution

It can be done asynchronously if your call to res.get can be made asynchronously (for example, if it is defined with the async keyword).

While grpc.server says it requires a futures.ThreadPoolExecutor, it will actually work with any futures.Executor that calls the behaviors submitted to it on some thread other than the one on which they were passed. Were you to pass grpc.server a futures.Executor implemented by you that used only one thread to carry out four hundred (or more) concurrent calls to EventReporting.ReportEvent, your server should avoid the kind of blocking you describe.
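As a rough illustration of the kind of executor the answer describes, here is a minimal sketch of a futures.Executor subclass that hands every submitted behavior to a single background worker thread. The class name, the queue-based design, and the wiring comment at the end are my own assumptions, not part of the original answer; it only avoids a 400-thread pool if the submitted behaviors cooperatively yield instead of blocking, which is exactly the precondition the answer states for res.get.

```python
import queue
import threading
from concurrent import futures


class SingleThreadExecutor(futures.Executor):
    """Runs every submitted behavior on one dedicated worker thread
    instead of a large thread pool.

    Note: with plain blocking calls (like res.get()) the behaviors would
    simply run one after another on that thread; one thread is only
    enough for many in-flight calls if the behaviors yield rather than
    block, as the answer points out.
    """

    def __init__(self):
        self._work = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            item = self._work.get()
            if item is None:  # shutdown sentinel
                break
            fn, args, kwargs, future = item
            if not future.set_running_or_notify_cancel():
                continue  # the call was cancelled before it started
            try:
                future.set_result(fn(*args, **kwargs))
            except Exception as exc:  # propagate errors through the future
                future.set_exception(exc)

    def submit(self, fn, *args, **kwargs):
        future = futures.Future()
        self._work.put((fn, args, kwargs, future))
        return future

    def shutdown(self, wait=True):
        self._work.put(None)
        if wait:
            self._worker.join()


# Hypothetical wiring (assumes the modern grpc.server API rather than the
# beta-generated code shown in the question):
# server = grpc.server(SingleThreadExecutor())
```

As written, this executor serializes handler calls; it only pays off if the handler body hands its blocking work off elsewhere (for example, awaiting the Celery result on an event loop or using cooperatively scheduled I/O), rather than calling res.get() synchronously.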