http: optimize IncomingMessage._dump #59747
Conversation
lgtm
Force-pushed from d3ff5f6 to e86d740.
Failed to start CI. ⚠ Commits were pushed since the last approving review: http: optimize IncomingMessage._dump. ✘ Refusing to run CI on potentially unsafe PR. https://github.com/nodejs/node/actions/runs/17445334596
Force-pushed from 197ba2c to 60e12ac.
@RafaelGSS why did you close?
It was a mistake, sorry @ronag
doc/api/http.md (Outdated)
@@ -3632,6 +3635,10 @@ changes:
* `rejectNonStandardBodyWrites` {boolean} If set to `true`, an error is thrown
  when writing to an HTTP response which does not have a body.
  **Default:** `false`.
* `dumpGETBody` {boolean} If set to `true`, the request body for `HEAD` and `GET`
  requests will be immediately dumped and the `end` and `close` events for
  `IncomingMessage` will be emitted before the server emits `'request'`. This
I understand that this is behind an option, but I find it odd. It breaks the invariant where `'end'` / `'close'` are always emitted.
This is a valid point. I don't think it's documented anywhere that it must emit the lifecycle events, though it is indeed breaking. It is totally valid from the streams API, since you can receive an already-closed stream. It will work with `stream.finished` and `stream.pipeline`.
Force-pushed from d9d8fbc to fc3f493.
@mcollina @RafaelGSS benchmark CI is not meaningful, I added my own autocannon benchmark #59747 (comment)

```js
const http = require('http');

http.createServer((req, res) => {
  res.end();
}).listen(9000);
```
Can we implement it so that if someone is watching for `'end'` or `'close'` it follows the usual path, while if nobody is, we can avoid emitting them? Alternatively, we could have `_dump` actually emit them.
With fastify:
Benchmark result on my tests: so, it turns out my computer was too fast and it was delivering the same amount of req/sec regardless of binary. In other words, I was being limited by wrk2 itself. Now I have configured proper values, and the latency (which is the most important metric in terms of wrk2) has shown the following distribution:

node v24.5.0

And autocannon reported: node v24.5.0

versus
```js
req._readableState.endEmitted = true;
req._readableState.destroyed = true;
req._readableState.closed = true;
req._readableState.closeEmitted = true;
```
Maybe move this stream internal handling into some function exported by the stream implementation?
`_dumped` is specific to http.
I refer to all the `req._readableState.*` assignments, which are not http-specific to my knowledge. They look more like stream internals.
It seems the user-facing option is gone now and this behavior is not opt-in anymore. The repeated force-pushes make this PR a bit hard to follow. I think the PR headline/description should be improved a bit: currently a reader might not expect that this optimization of an internal function is actually a breaking change.
Changed to draft.
Co-authored-by: Gürgün Dayıoğlu <[email protected]>
Indeed - that's definitely a problem imo, and this behaviour has to be optional (at least an option to disable it, but probably an opt-in option for a smoother transition). There will definitely be plenty of cases in production where users do actually use GET bodies even though they shouldn't (we even have examples in our own tests, which have been fixed here), and there are real-world scenarios (like the ElasticSearch API) where supporting bodies with GET is required. Happy to discourage users from doing this, but we do have to support it for people who need it.
Just for context, even for Fastify we get a lot of people wanting to have a body in GET requests unfortunately |
One option is to check the content-length header and only apply this optimization if it's not 0?
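One reading of that suggestion, as a hedged sketch (the helper name is hypothetical): treat the request as body-less, and therefore safe to dump eagerly, only when `content-length` is absent or `"0"`.

```js
'use strict';

// Hypothetical guard for the eager-dump fast path: the request
// declares no body when content-length is missing or zero.
function declaresNoBody(headers) {
  const len = headers['content-length'];
  return len === undefined || len === '0';
}

console.log(declaresNoBody({}));                         // true
console.log(declaresNoBody({ 'content-length': '0' }));  // true
console.log(declaresNoBody({ 'content-length': '12' })); // false
```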
Notes for folks on these results: the tooling runs while capturing perf events and with the inspector protocol on; it is more designed for debugging performance than for benchmarking, but it should be pretty reliable for doing comparisons over small patches like this.
Same on Express. Although I would love to make this default and have an option to allow
We'd also need to check for |
PRE:
POST: