You mean if two HTTP clients make requests to a reverse HTTP proxy, could the proxy reuse TCP connections to the HTTP server? This is called connection pooling, and it is common in practice. The proxy opens a pool of persistent connections with each backend endpoint. Then, the proxy queues requests, and each request gets sent on an available TCP connection. As far as the HTTP RFC is concerned, there is simply no such thing; the specification does not recognise the concept.

From a TLS perspective, if the proxy is an HTTP proxy (L7), clients perform TLS handshakes with the proxy, not the backend web servers, and hence there is no problem. However, if the proxy operates at L4, TLS termination must occur downstream (TLS pass-through), which poses a complication. My current understanding is that L4 proxies maintain 1:1 connections with backends. Meaning, each incoming connection has a corresponding outgoing connection to a backend, which removes our ability to reuse connections and leverage connection pooling.

Load Balancing with NGINX and NGINX Plus, Part 2:
"NGINX maintains a 'cache' of keepalive connections – a set of idle keepalive connections to the upstream servers – and when it needs to forward a request to an upstream, it uses an already established keepalive connection from the cache rather than creating a new TCP connection."

See also: Introduction to modern network load balancing and proxying, Matt Klein.
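To make the L7 pooling behaviour concrete, here is a minimal sketch in Go of a reverse proxy whose transport keeps idle keepalive connections to the backend and reuses them across client requests. The backend address, listen port, pool sizes, and timeout are illustrative assumptions, not values taken from the cited sources.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical backend address, chosen only for illustration.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(backend)

	// The transport maintains a pool of idle (keepalive) TCP connections to the
	// backend. Requests from different downstream clients are multiplexed over
	// this pool, rather than each client connection mapping 1:1 to a backend
	// connection - the L7 connection-pooling behaviour described above.
	proxy.Transport = &http.Transport{
		MaxIdleConns:        100,              // total idle connections kept across all backends
		MaxIdleConnsPerHost: 10,               // idle connections kept per backend endpoint
		IdleConnTimeout:     90 * time.Second, // how long an unused pooled connection is retained
	}

	// The proxy terminates client connections (and TLS, if configured) itself,
	// so nothing has to be passed through to the backend at L4.
	log.Fatal(http.ListenAndServe(":8000", proxy))
}
```

NGINX exposes the same idea through the keepalive directive inside an upstream block, which bounds the idle-connection cache described in the quoted article.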