Environment

Node.js v22.14.0
h3 v1.15.1

Reproduction

Repository: https://github.com/Hornwitser/h3-stream-problems

See the README.md file for instructions on reproduction steps.
Describe the bug
When handling streams it's important that backpressure and errors are properly handled, as failure to do so can lead to resource leaks, crashes, and denial-of-service vulnerabilities that are trivially exploitable in your app.
Bad error handling in streamed responses
h3 promises a really simple interface for working with response streams: just return the stream and it'll work.
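A minimal sketch of such a handler (the file path is illustrative; the exact code is in the reproduction repository):

```js
import { createReadStream } from "node:fs";
import { defineEventHandler } from "h3";

// h3 sends a returned stream as the response body.
export default defineEventHandler(() => {
  return createReadStream("path/to/file");
});
```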
At first glance this appears to be just fine. But if the client aborts the request midway, the premature close of the network stream is not handled by h3 and the file stream is kept open, which in turn means the file descriptor opened by the file stream is leaked.
Errors originating from the opposite direction should be handled as well; streams don't always succeed at providing the data they promised. If the hard drive returns an uncorrectable read error for the file being streamed, Node.js's file stream will raise an error event and destroy itself, but h3 does not handle this error or terminate the network stream. As a result the request will simply hang indefinitely in the middle of the stream.
The `node:stream` `pipeline` function correctly handles both of these cases and can be used instead.
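A sketch of that approach (not the exact reproduction code; `event._handled = true` is the v1 mechanism, mentioned in the discussion below, for telling h3 the response has already been handled):

```js
import { createReadStream } from "node:fs";
import { pipeline } from "node:stream";
import { defineEventHandler } from "h3";

export default defineEventHandler((event) => {
  const stream = createReadStream("path/to/file");
  // pipeline destroys both streams when either side errors or closes early.
  pipeline(stream, event.node.res, (err) => {
    if (err) console.error(err);
  });
  // Assumption: stop h3 from also writing to the response.
  event._handled = true;
});
```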
Then a client aborting a request will close the file stream and release the file handle, and an error from the file stream will terminate the network stream.
h3 fails to propagate errors in both directions for both Node.js streams and the web standard `ReadableStream` interface.
No backpressure in Response streams
Another important aspect of handling streams is making sure that backpressure gets propagated from the consumer to the producer. When responding with a Node.js stream this works as it should, but when using a web standard `ReadableStream`, h3 will read from it as fast as possible and buffer data that has yet to be sent to the client in memory. For example, a fast-feeding infinite stream will crash the server when running with Node.js.
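A sketch of such a stream (the chunk size is illustrative):

```js
import { defineEventHandler } from "h3";

// A source that can always provide data instantly. A consumer that applies
// backpressure would stop pulling once the response socket is full; h3
// keeps pulling and buffering, so memory grows until the process crashes.
export default defineEventHandler(() => {
  const chunk = new Uint8Array(1024 * 1024); // 1 MiB of zeroes
  return new ReadableStream({
    pull(controller) {
      controller.enqueue(chunk);
    },
  });
});
```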
Once again, using the `node:stream` `pipeline` function to pass the stream over does not have this issue. It'll only buffer a small amount of data and pull from the stream when it needs more to send to the client.
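A sketch of that approach, assuming `Readable.fromWeb` (available in Node.js since v17) to bridge the web stream into a Node.js stream; `makeInfiniteStream` is a hypothetical stand-in for the source above:

```js
import { Readable, pipeline } from "node:stream";
import { defineEventHandler } from "h3";

export default defineEventHandler((event) => {
  const webStream = makeInfiniteStream(); // hypothetical: the source above
  // pipeline applies backpressure, pulling from the source only when the
  // response socket can accept more data.
  pipeline(Readable.fromWeb(webStream), event.node.res, (err) => {
    if (err) console.error(err);
  });
  event._handled = true; // assumption: v1 way to mark the response handled
});
```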
No backpressure in Request streams
To get a stream of the request body sent by the client, h3 provides `getRequestWebStream(event)`. This returns a `ReadableStream` that you can process or pipe into a writable stream. This also does not handle backpressure, and will buffer an unbounded amount of data from the client should the client send it faster than the stream is consumed on the server.
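A sketch that makes the problem visible, using a deliberately slow `WritableStream` as the consumer (the delay is illustrative):

```js
import { defineEventHandler, getRequestWebStream } from "h3";

export default defineEventHandler(async (event) => {
  const slowSink = new WritableStream({
    async write(chunk) {
      // Simulate a slow consumer such as a disk or an upstream service.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    },
  });
  // A fast client outpaces this sink, and h3 buffers the difference in memory.
  await getRequestWebStream(event).pipeTo(slowSink);
  return "done";
});
```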
Bad error handling in Request streams
When consuming readable streams it's common to close the stream when done with it. The `pipeline` helper does this, and `for await` iteration also does this by default. But the stream provided by `getRequestWebStream` will be closed when the request stream ends, and no error handling is done on this, which means this canonical async iteration of the request stream
router.post("/stream",defineEventHandler(event=>{forawait(constchunkofgetRequestWebStream(event)){throwError("oops something went wrong");// Leaving the async iteration here closes the stream by default.}}));
will cause Node.js to crash with this uncaught exception:
```
node:internal/webstreams/readablestream:1077
    throw new ERR_INVALID_STATE.TypeError('Controller is already closed');
    ^

TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
```
You can avoid this by iterating with `getRequestWebStream(event).values({ preventCancel: true })`. Though using the `pipeline` function directly with the `event.node.req` stream does not have either of these problems.
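Sketches of both alternatives (`destination` is a hypothetical writable stream of your choosing):

```js
import { pipeline } from "node:stream/promises";
import { defineEventHandler, getRequestWebStream } from "h3";

export default defineEventHandler(async (event) => {
  // Alternative 1: preventCancel keeps the stream open if the loop exits
  // early, avoiding the double-close that crashes Node.js.
  for await (const chunk of getRequestWebStream(event).values({ preventCancel: true })) {
    // process chunk ...
  }

  // Alternative 2: bypass h3 and pipe the raw Node.js request stream.
  // await pipeline(event.node.req, destination);
});
```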
Discussion
The handling of streams in h3 on Node.js is currently so bad that one is better off not trying to use any of the provided functionality. None of these surprising behaviours are documented, nor is any guidance provided on how to properly handle errors when dealing with streams, which makes for a bad developer experience using h3.
I ran into these issues when trying to make an EventSource stream work on Nuxt.js, and the more I dug into it the more problems I found.
I ran the same test code against the current main branch for h3 v2, and all the test cases except for `node client.mjs get` and `node client-error` were still exhibiting the same bad behaviour. In addition to that, the `event._handled = true` workaround to prevent h3 from touching the response no longer works. This will break real code used in the wild (see the linked gist in the previous post) and therefore should absolutely be documented in the migration guide.
Given that the v2 branch's stream handling is just as broken as v1's, you now have to resort to esoteric workarounds to be able to correctly handle response streams at all. For example, this works correctly in h3 v2:
```js
import { createReadStream } from "node:fs";
import { pipeline } from "node:stream";
import { defineEventHandler } from "h3";

export default defineEventHandler((event) => {
  const stream = createReadStream("path/to/file");
  pipeline(stream, event.node.res, (err) => {
    if (err) console.error(err);
  });
  return {
    pipe: () => new Promise((resolve, reject) => {
      // ignore pipe to trick h3 into not touching the response stream
    }),
  };
});
```
Correctly handling errors and backpressure in response streams should not be this hard!
Additional context
See also somewhat related discussions at https://gist.github.com/Atinux/05836469acca9649fa2b9e865df898a2 and nitrojs/nitro#2118