I'm using Apache HttpClient (version 5.4.x) with PoolingHttpClientConnectionManager, which pools connections on a per-route basis. According to the documentation, each route has its own pool of connections.
However, I’m concerned that a surge of slow, long-running requests could tie up enough connections to delay or block the faster requests.
My question is: Does the per-route connection pooling fully isolate these two types of requests, or is there a risk of one affecting the other under heavy load?
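For context, the setup looks roughly like the following minimal sketch (the host names, limits, and endpoints are placeholders for illustration, not the actual configuration):

```java
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;

public class PoolSetup {
    public static void main(String[] args) throws Exception {
        // One shared connection manager for the whole client; explicit limits
        // are shown only for illustration (without these calls the library's
        // own defaults apply).
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(25);
        cm.setDefaultMaxPerRoute(5);

        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build()) {

            // Both kinds of traffic go through the same client and pool:
            // one route serves long-running requests, the other quick ones.
            client.execute(new HttpGet("https://slow.example.com/report"), response -> {
                System.out.println("slow route: " + response.getCode());
                return null;
            });
            client.execute(new HttpGet("https://fast.example.com/ping"), response -> {
                System.out.println("fast route: " + response.getCode());
                return null;
            });
        }
    }
}
```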
Comment: @ChinHuang, thanks for the link to the related question. However, it’s not quite what I was asking. I’ve edited my question, and I hope it’s clearer now. – Max Alex
1 Answer
HttpClient manages persistent connections in the pool based on their routes, not on their performance characteristics. If a particular route is more important than others, one can allow that route to allocate more concurrent connections, but this has nothing to do with the performance of individual message exchanges. Connections for that specific route will be allocated regardless of other routes, as long as the total maximum number of connections has not been reached.
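If you want to keep a surge on the slow route from leasing most of the pool, the practical levers are the per-route and total limits on the connection manager. Here is a minimal sketch of that idea; the host name and the numbers are made up for illustration:

```java
import org.apache.hc.client5.http.HttpRoute;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.core5.http.HttpHost;

public class PoolLimits {
    public static void main(String[] args) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();

        // Ceiling across all routes. Once this many connections are leased,
        // every route competes for the remainder, fast and slow alike.
        cm.setMaxTotal(100);

        // Default ceiling for each individual route.
        cm.setDefaultMaxPerRoute(20);

        // Cap the route that serves the long-running requests so a surge
        // there can never lease more than 30 of the 100 connections,
        // leaving headroom for the faster routes.
        HttpRoute slowRoute = new HttpRoute(new HttpHost("https", "slow.example.com", 443));
        cm.setMaxPerRoute(slowRoute, 30);

        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build();
        // ... use the client as usual ...
    }
}
```

With limits like these, the slow route can saturate its own cap of 30 connections, but the remaining 70 stay available to other routes, so fast requests are only affected once the total ceiling itself is exhausted.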