This article looks at how to handle gRPC client connection scope and pooling in Go; it may serve as a useful reference if you are facing the same question.

Problem Description

Consider the example from the Go gRPC code base:

package main

import (
    "context"
    "log"
    "os"

    "google.golang.org/grpc"
    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

const (
    address     = "localhost:50051"
    defaultName = "world"
)

func main() {
    // Set up a connection to the server.
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    c := pb.NewGreeterClient(conn)

    // Contact the server and print out its response.
    name := defaultName
    if len(os.Args) > 1 {
        name = os.Args[1]
    }
    r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})
    if err != nil {
        log.Fatalf("could not greet: %v", err)
    }
    log.Printf("Greeting: %s", r.Message)
}

When consuming a gRPC service from another service, what should the scope of the connection (conn) be? I assume it should have affinity with the scope of the request being handled by the consumer service, but I have yet to find any documentation around this. Should I be using a connection pool here?

For example:

  1. gRPC consumer service receives a request
  2. Establish a connection to the gRPC service (either directly or via a pool)
  3. Make n requests to the gRPC service
  4. Close the gRPC connection (or release it back to the pool)

Recommended Answer

From experience, gRPC client connections should be re-used for the lifetime of the client application, as they are safe for concurrent use. Furthermore, one of the key features of gRPC is fast responses from remote procedure calls, which cannot be achieved if you have to reconnect on every request received.
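
As a minimal sketch of that pattern (the HTTP endpoint, listen address, and gRPC target below are illustrative assumptions, not part of the original answer), the *grpc.ClientConn is dialed once at startup and the generated client is shared by every concurrent request handler:

package main

import (
    "context"
    "log"
    "net/http"
    "time"

    "google.golang.org/grpc"
    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
    // Dial once at startup; the *grpc.ClientConn is safe for concurrent use
    // and multiplexes all RPCs over its underlying HTTP/2 connection.
    conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    greeter := pb.NewGreeterClient(conn)

    // Every incoming HTTP request re-uses the same client and connection.
    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), time.Second)
        defer cancel()
        resp, err := greeter.SayHello(ctx, &pb.HelloRequest{Name: r.URL.Query().Get("name")})
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        w.Write([]byte(resp.Message))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Because the ClientConn multiplexes RPCs, there is no dial or close per request; only conn.Close() at shutdown is needed.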

Nonetheless, it is highly recommended to use some kind of gRPC load balancing along with these persistent connections. Otherwise, a lot of the load may end up on a few long-lived gRPC client-server connections. Load balancing options include:

  1. A gRPC connection pool on the client side combined with a server-side TCP (Layer 4) load balancer. This creates a pool of client connections up front and re-uses it for subsequent gRPC requests. In my opinion this is the easier route to implement. See Pooling gRPC Connections for an example of connection pooling on the gRPC client side using the grpc-go-pool library (a minimal pooling sketch also follows this list).
  2. An HTTP/2 (Layer 7) load balancer with gRPC support, which load-balances individual requests. See gRPC Load Balancing for an overview of the different gRPC load balancing options. nginx recently added support for gRPC load balancing.
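
Below is a minimal, hand-rolled sketch of the client-side pooling idea from option 1. It deliberately does not use the grpc-go-pool API; the pool type, its methods, the target address, and the pool size are all illustrative assumptions, using a fixed-size pool backed by a buffered channel:

package main

import (
    "log"

    "google.golang.org/grpc"
)

// connPool is an illustrative fixed-size pool of gRPC client connections.
type connPool struct {
    conns chan *grpc.ClientConn
}

// newConnPool dials size connections to target and stores them in a channel.
func newConnPool(target string, size int) (*connPool, error) {
    p := &connPool{conns: make(chan *grpc.ClientConn, size)}
    for i := 0; i < size; i++ {
        conn, err := grpc.Dial(target, grpc.WithInsecure())
        if err != nil {
            return nil, err
        }
        p.conns <- conn
    }
    return p, nil
}

// Get blocks until a pooled connection is available.
func (p *connPool) Get() *grpc.ClientConn { return <-p.conns }

// Put returns a connection to the pool instead of closing it.
func (p *connPool) Put(conn *grpc.ClientConn) { p.conns <- conn }

func main() {
    pool, err := newConnPool("localhost:50051", 5) // assumed target and pool size
    if err != nil {
        log.Fatalf("failed to build pool: %v", err)
    }
    conn := pool.Get()
    defer pool.Put(conn) // release back to the pool when done
    // Use conn with a generated client, e.g. pb.NewGreeterClient(conn).
}

Behind a TCP (Layer 4) load balancer, each connection in such a pool can terminate on a different backend, so spreading RPCs across the pooled connections also spreads load across servers.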

That concludes this article on Go gRPC client connection scope and pooling; hopefully the recommended answer above is helpful.
