Imagine configuring an HTTP connection pool and calling setMaxTotal(50).
A reasonable assumption would be that the HttpClient will henceforth open up to 50 concurrent connections to your upstream.
Well, not in Java-land: here you get exactly 2 outgoing connections, apparently regardless of what you set as the maximum total connections.
It turns out there is a second setting on the PoolingHttpClientConnectionManager called maxPerRoute, which controls how many connections you can make to the same route (essentially the same target host/port) and which defaults to 2. Since in our current setup we mostly query one endpoint over and over again, the maxTotal setting is pretty useless and the limiting factor will be maxPerRoute.
Thankfully there is a setDefaultMaxPerRoute that can be tweaked, and you can also specify individual limits per upstream route with setMaxPerRoute.
The final code in question is:
PoolingHttpClientConnectionManager poolingConnectionManager = new PoolingHttpClientConnectionManager();
poolingConnectionManager.setMaxTotal(MAX_TOTAL_CONNECTIONS);
poolingConnectionManager.setDefaultMaxPerRoute(MAX_TOTAL_CONNECTIONS);
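If you only want to raise the limit for one specific upstream instead of for every route, a minimal sketch along these lines should work (assuming HttpClient 4.x; the host name and the numbers are placeholders, not values from our setup):

import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PoolConfig {
    public static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(50);
        cm.setDefaultMaxPerRoute(10);
        // allow more parallel connections to one heavily used upstream (placeholder host)
        cm.setMaxPerRoute(new HttpRoute(new HttpHost("api.example.com", 443, "https")), 50);
        return HttpClients.custom()
                .setConnectionManager(cm)
                .build();
    }
}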
To debug the issue of slow-responding upstreams I also wrote a little Go webserver called blackhole that does exactly what the name implies: it accepts any HTTP connection and swallows it for 100 seconds. This makes it easy to test your code against slow-responding HTTP servers (for example when they are under duress or have become unresponsive).
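The real blackhole is a Go program; purely to illustrate the idea, here is a rough stand-in using the JDK's built-in com.sun.net.httpserver (the port and sleep time are arbitrary, and this is not the author's tool):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class SlowServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            try {
                Thread.sleep(100_000); // hold the connection open for 100 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            exchange.sendResponseHeaders(204, -1); // finally answer with an empty response
            exchange.close();
        });
        // one thread per held connection, so many clients can be stalled at once
        server.setExecutor(Executors.newCachedThreadPool());
        server.start();
    }
}

Point your HttpClient at localhost:8080 and you can watch exactly how many requests actually go out in parallel while the rest queue up waiting for a pooled connection.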