In the example code in this article, how is the last segment of the stream working on the line:
fs.createReadStream(filePath).pipe(brotli()).pipe(res)
I understand that the first part reads the file and the second compresses it, but what is .pipe(res)? It seems to do the job I'd usually do with res.send or res.sendFile.
Full code†:
const express = require('express')
const accepts = require('accepts')
const fs = require('fs')
const path = require('path')
const brotli = require('iltorb').compressStream

function onRequest (req, res) {
  res.setHeader('Content-Type', 'text/html')
  const fileName = req.params.fileName
  const filePath = path.resolve(__dirname, 'files', fileName)
  const encodings = new Set(accepts(req).encodings())
  if (encodings.has('br')) {
    res.setHeader('Content-Encoding', 'br')
    fs.createReadStream(filePath).pipe(brotli()).pipe(res)
  }
}

const app = express()
app.use('/files/:fileName', onRequest)
app.listen(5000)
localhost:5000/files/test.txt => Browser displays text contents of that file
How does simply piping the data to the response object render the data back to the client?
† which I changed slightly to use express, plus a few other minor things.
4 Answers
"How does simply piping the data to the response object render the data back to the client?"
The wording of "the response object" in the question could mean the asker is trying to understand why piping data from a stream to res
does anything. The misconception is that res
is just some object.
This is because every Express response (res) inherits from http.ServerResponse, which is a writable Stream. Thus, whenever data is written to res, the written data is handled by http.ServerResponse, which internally sends it back to the client.
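To see that res really is just a writable stream, here is a minimal sketch using Node's plain http module (no Express, no compression; the port and body text are arbitrary choices for illustration): writing to the response is all it takes to send data to the client.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/plain');
  res.write('hello '); // each write() pushes data down the open connection
  res.end('world');    // end() finishes the response stream
}).listen(5000);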
Internally, res.send actually just writes to the underlying stream it represents (the response itself), and res.sendFile actually pipes the data read from the file into it.
In case the act of "piping" data from one stream to another is unclear, see the section at the bottom.
If, instead, the flow of data from file to client isn't clear to the asker, then here's a separate explanation.
I'd say the first step to understanding this line is to break it up into smaller, more understandable fragments:
First, fs.createReadStream is used to get a readable stream of the file's contents.
const fileStream = fs.createReadStream(filePath);
Next, a transform stream that transforms data into a compressed format is created, and the data in fileStream is "piped" (passed) into it.
const compressionStream = brotli();
fileStream.pipe(compressionStream);
Finally, the data that passes through the compressionStream (the transform stream) is piped into the response, which is also a writable stream.
compressionStream.pipe(res);
Laid out as a chain, the process is quite simple: fileStream → compressionStream → res. Following the flow of data is now easy: the data first comes from a file, passes through the compressor, and finally reaches the response, which internally sends it back to the client.
Wait, but how does the compression stream pipe into the response stream? The answer is that pipe returns the destination stream, which means that when you do a.pipe(b), you get b back from the method call.
Take the line a.pipe(b).pipe(c) for example. First, a.pipe(b) is evaluated, returning b. Then .pipe(c) is called on the result of a.pipe(b), which is b, making it equivalent to b.pipe(c).
a.pipe(b).pipe(c);
// is the same as
a.pipe(b); // returns `b`
b.pipe(c);
// is the same as
(a.pipe(b)).pipe(c);
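As a self-contained sketch you can run, PassThrough streams stand in for a, b, and c here (that choice is purely for illustration); it shows both the chaining and the return value of pipe.
const { PassThrough } = require('stream');

const a = new PassThrough();
const b = new PassThrough();
const c = new PassThrough();

const returned = a.pipe(b);   // pipe() returns its destination
console.log(returned === b);  // true

b.pipe(c);
c.on('data', (chunk) => console.log(chunk.toString())); // logs "hello"
a.end('hello'); // data flows a -> b -> c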
The wording "imply piping the data to the response object" in the question could also entail the asker doesn't understand the flow of the data, thinking that the data goes directly from a
to c
. Instead, the above should clarify that the data goes from a
to b
, then b
to c
; fileStream
to compressionStream
, then compressionStream
to res
.
A Code Analogy
If the whole process still makes no sense, it might be beneficial to rewrite the process without the concept of streams:
First, the data is read from the file.
const fileContents = fs.readFileSync(filePath);
The fileContents are then compressed, using some compress function.
function compress(data) {
  // ...
}
const compressedData = compress(fileContents);
Finally, the data is sent back to the client through the response, res.
res.send(compressedData);
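For a concrete (non-streaming) version of this analogy, here is a sketch that uses Node's built-in zlib module in place of the compress function; using zlib instead of iltorb, and assuming a Node version with built-in Brotli support, are choices made purely for illustration.
const fs = require('fs');
const path = require('path');
const zlib = require('zlib');

function onRequest(req, res) {
  const filePath = path.resolve(__dirname, 'files', req.params.fileName);
  // read the whole file into memory (the readable stream analogue)
  const fileContents = fs.readFileSync(filePath);
  // compress it in one go (the transform stream analogue)
  const compressedData = zlib.brotliCompressSync(fileContents);
  // send it back to the client (the writable stream analogue)
  res.setHeader('Content-Encoding', 'br');
  res.send(compressedData);
}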
The original line of code in the question and the above process are more or less the same, barring the use of streams in the original. The act of taking in data from an outside source (fs.readFileSync) is like a readable Stream, the act of transforming the data via a function (compress) is like a transform Stream, and the act of sending the data to an outside destination (res.send) is like a writable Stream.
"Streams are Confusing"
If you're confused about how streams work, here's a simple analogy: each type of stream can be thought of in the context of water (data) flowing down the side of a mountain from a lake at the top (concrete examples of each type follow the list).
- Readable streams are like the lake on the top, the source of the water (data).
- Writable streams are like people or plants at the bottom of the mountain, consuming the water (data).
- Duplex streams are just streams that are both Readable and Writable. They're akin to a facility at the bottom that takes in water and puts out some type of product (e.g. purified water, carbonated water, etc.).
- Transform streams are also Duplex streams. They're like rocks or trees on the side of the mountain, forcing the water (data) to take a different path to get to the bottom.
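To tie the analogy to concrete APIs, here are built-in Node.js streams of each type (the file paths are placeholders):
const fs = require('fs');
const net = require('net');
const zlib = require('zlib');

const lake = fs.createReadStream('./files/test.txt'); // Readable: the source
const person = fs.createWriteStream('./copy.txt');    // Writable: the consumer
const facility = new net.Socket();                     // Duplex: both readable and writable
const rocks = zlib.createGzip();                       // Transform: reshapes data on the way down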
A convenient way of writing all the data read from a readable stream directly to a writable stream is to just pipe it, which is like directly connecting the lake to the people.
readable.pipe(writable); // easy & simple
This is in contrast to reading data from the readable stream, then manually writing it to the writable stream:
// "pipe" data from a `readable` stream to a `writable` one.
readable.on('data', (chunk) => {
writable.write(chunk);
});
readable.on('end', () => writable.end());
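A related caveat: pipe() does not forward errors between the streams it connects, whereas stream.pipeline (available since Node 10) does, and also cleans up on failure. A minimal sketch, compressing a file to disk; the file names and the use of gzip are arbitrary choices for illustration.
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('./files/test.txt'),     // readable source
  zlib.createGzip(),                            // transform
  fs.createWriteStream('./files/test.txt.gz'),  // writable destination
  (err) => {
    if (err) console.error('pipeline failed:', err);
    else console.log('pipeline succeeded');
  }
);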
You might immediately question why Transform streams are the same as Duplex streams. The only difference between the two is how they're implemented: Transform streams implement a _transform function that is supposed to take in written data and produce readable data, whereas a Duplex stream is simply both a Readable and a Writable stream, and thus has to implement _read and _write.
I'm not sure if I understand your question correctly, but I'll attempt to explain the code fs.createReadStream(filePath).pipe(brotli()).pipe(res), which will hopefully clarify your doubt.
If you check the source code of iltorb, compressStream returns an object of TransformStreamEncode, which extends Transform. As you can see, Transform streams implement both the Readable and Writable interfaces. So when fs.createReadStream(filePath).pipe(brotli()) is executed, TransformStreamEncode's writable interface is used to write the data read from filePath. When the next call, .pipe(res), is executed, the readable interface of TransformStreamEncode is used to read the compressed data, which is passed to res. If you check the documentation of the HTTP response object, it implements the Writable interface, so it internally handles the piped data, reading the compressed output from the readable TransformStreamEncode and sending it to the client.
HTH.
You ask:
How does simply piping the data to the response object render the data back to the client?
Most people understand "render X" as "produce some visual representation of X". Sending the data to the browser (here, through piping) is a necessary step prior to the browser rendering the file read from the file system, but the piping is not what does the rendering. What happens is that the Express app takes the content of the file, compresses it, and sends the compressed stream as-is to the browser. This is a necessary step because the browser cannot render anything if it does not have the data. So .pipe is only used to pass the data to the response sent to the browser.
By itself, this does not "render" anything, nor tell the browser what to do with the data. Before the piping, this happens: res.setHeader('Content-Type', 'text/html'). So the browser will see a header telling it that the content is HTML. Browsers know what to do with HTML: display it. So the browser will take the data it gets, decompress it (because the Content-Encoding header tells it that it is compressed), interpret it as HTML, and show it to the user; that is, render it.
what is .pipe(res)? which seems to do the job I'd usually do with res.send or res.sendFile.
.pipe is used to pass the entire content of a readable stream to a writable stream. It is a convenience method for handling streams. Using .pipe to send a response makes sense when you must read from a stream to get the data you want to include in the response. If you do not have to read from a stream, you should use .send or .sendFile. They perform nice bookkeeping tasks, like setting the Content-Length header, that you would otherwise have to do yourself.
In fact, the example you show is a poor attempt at performing content negotiation. That code should be rewritten to use res.sendFile to send the file to the browser, and the handling of compression should be done by a middleware designed for content negotiation, because there is much more to it than only supporting the br scheme.
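A sketch of what that rewrite could look like: delegate negotiation and compression to middleware and let res.sendFile do the bookkeeping. The compression package is used here only as an illustrative choice (it negotiates gzip/deflate from the Accept-Encoding header; Brotli support depends on the version you use).
const express = require('express');
const path = require('path');
const compression = require('compression');

const app = express();
app.use(compression()); // negotiates and applies compression per request

app.get('/files/:fileName', (req, res) => {
  const filePath = path.resolve(__dirname, 'files', req.params.fileName);
  res.sendFile(filePath); // sets Content-Type, Content-Length, etc.
});

app.listen(5000);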
Read this to obtain an answer: Node.js Streams: Everything you need to know.
I'll quote the interesting part:
a.pipe(b).pipe(c).pipe(d)
# Which is equivalent to:
a.pipe(b)
b.pipe(c)
c.pipe(d)
# Which, in Linux, is equivalent to:
$ a | b | c | d
So fs.createReadStream(filePath).pipe(brotli()).pipe(res) is equivalent to:
var readableStream = fs.createReadStream(filePath).pipe(brotli());
readableStream.pipe(res);
and
# readable.pipe(writable)
readable.on('data', (chunk) => {
  writable.write(chunk);
});
readable.on('end', () => {
  writable.end();
});
So Node.js reads the file and converts it to a readable stream object with fs.createReadStream(filePath). Then it hands that stream to the iltorb library, where .pipe(brotli()) creates another readable stream (containing the compressed content), and finally the content is passed to res, which is a writable stream. Node.js then internally calls res.write(), which writes the data back to the browser.
res is also a stream, so you can pipe to it. Don't forget to call res.destroy() on a read error. – Aikon Mogwai
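A sketch of that advice applied inside the onRequest handler from the question (fs, filePath, brotli, and res are the names from that handler; the error handling itself is the addition):
const fileStream = fs.createReadStream(filePath);
fileStream.on('error', (err) => {
  // e.g. the file does not exist: tear down the response so the client
  // isn't left waiting on a connection that will never receive data
  console.error('read failed:', err);
  res.destroy(err);
});
fileStream.pipe(brotli()).pipe(res);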