When it comes to timing out HTTP requests, it looks like node.js has three separate timeouts:
- server.setTimeout http://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
- request.setTimeout http://nodejs.org/api/http.html#http_request_settimeout_timeout_callback
- response.setTimeout http://nodejs.org/api/http.html#http_response_settimeout_msecs_callback
Can anyone clarify what the difference is between each of these methods and why someone would want to use each one?
- `server.setTimeout`: You are running a web server in your node.js app. This setting determines how long node will leave an idle client connection open (no traffic in either direction) before closing it, for example when a user loses power at home while downloading a large file from your app. You set it once and it applies to every client connection your server receives.
- `request.setTimeout`: This is for outgoing requests from your node program to a remote web server. Say you write a scraper to download a file and your Internet connection dies mid-download; this setting determines when node finally gives up waiting for data from the remote side. It only affects that specific request, because each outgoing request gets its own TCP connection and the timeout closes just that connection.
- `response.setTimeout`: Since an HTTP request and its corresponding response travel over the same underlying TCP socket, my understanding is that `req.setTimeout` and `res.setTimeout` ultimately result in the same underlying system call, setting the timeout on the TCP socket itself via the corresponding libuv/OS calls. So I think the two are equivalent and you can use whichever is more convenient or feels semantically clearer to you. I could be wrong about this, though, so if anyone knows for certain, feel free to correct me. A sketch of all three calls follows this list.
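To make the split concrete, here is a minimal sketch of where each call sits. The port, URL, and timeout values are arbitrary, and the handling shown (destroying the idle socket or aborting the outgoing request on timeout) is one reasonable choice, not the only one:

```js
var http = require('http');

// Incoming side: one idle timeout applies to every client connection the
// server accepts; an individual response can still override it.
var server = http.createServer(function (req, res) {
  // Hypothetical per-response override for a route that streams large files.
  res.setTimeout(5 * 60 * 1000);
  res.end('hello');
});

server.setTimeout(30 * 1000, function (socket) {
  // Once you supply a timeout callback, you are responsible for the idle
  // socket, so close it explicitly.
  socket.destroy();
});

server.listen(8080);

// Outgoing side: the timeout belongs to this one request, because each
// outgoing request gets its own TCP connection.
var req = http.get('http://example.com/big-file', function (res) {
  res.pipe(process.stdout);
});

req.setTimeout(10 * 1000, function () {
  // Give up waiting on the remote server and tear down the socket.
  req.abort();
});
```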
Generally the defaults are reasonable. However, you might want to set these timeouts longer if you know you have a lot of clients on very slow or flaky connections (say you serve mobile phones in remote areas, or satellite links) and connections that are actually still viable are being closed due to the timeout. You might want to set them shorter if you know your clients are well connected (like servers in the same datacenter) and you want to free up resources more aggressively.
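For example, adjusting the server-side idle timeout (which, in the Node versions of that era, defaulted to two minutes) is a single call; the values below are arbitrary:

```js
// Slow or flaky clients (remote mobile users, satellite links): allow
// more idle time before closing the connection.
server.setTimeout(10 * 60 * 1000);

// Well-connected clients (e.g. other servers in the same datacenter):
// reclaim idle sockets more aggressively.
// server.setTimeout(5 * 1000);
```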