poll_* methods to support custom futures implementations #78
base: main
Conversation
I'll implement the …

btw, if you implement Stream + Sink from futures you can call …
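For context, the Stream + Sink mapping the comment alludes to can be sketched like this. Everything below is illustrative: the real `futures::Sink` trait takes `Pin<&mut Self>` and a `Context`, and this `WriteHalf` and its methods are stand-ins modeled on the three-phase Sink protocol, not the crate's actual API.

```rust
use std::task::Poll;

// Hypothetical poll-based write half; the three methods mirror the phases
// of the futures::Sink protocol (poll_ready / start_send / poll_flush).
struct WriteHalf {
    queued: Vec<Vec<u8>>, // frames accepted but not yet written out
}

impl WriteHalf {
    // Sink::poll_ready: is there room to accept another frame?
    fn poll_ready(&mut self) -> Poll<()> {
        if self.queued.len() < 8 { Poll::Ready(()) } else { Poll::Pending }
    }

    // Sink::start_send: queue a frame without performing any I/O.
    fn start_send(&mut self, frame: Vec<u8>) {
        self.queued.push(frame);
    }

    // Sink::poll_flush: drive all queued frames out; returns bytes flushed.
    fn poll_flush(&mut self) -> Poll<usize> {
        Poll::Ready(self.queued.drain(..).map(|f| f.len()).sum())
    }
}
```

With `poll_*` methods shaped like this, a `futures::Sink` impl is a thin delegating wrapper, which is what makes the PR useful for custom futures implementations.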
Cargo.toml (Outdated)

```toml
[features]
default = ["simd"]
simd = ["simdutf8/aarch64_neon"]
upgrade = ["hyper", "pin-project", "base64", "sha1", "hyper-util", "http-body-util"]
unstable-split = []
```
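For context on why the flag matters downstream: a consumer crate opts into these features from its own Cargo.toml. A hypothetical example (the version requirement is deliberately left open):

```toml
# Hypothetical downstream Cargo.toml entry; version requirement is illustrative.
[dependencies]
fastwebsockets = { version = "*", features = ["upgrade", "unstable-split"] }
```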
Is it intentional to remove unstable-split? I would like to keep it.
Sure, I can put it back in
Well, the question is: why is it unstable?
Would love to see this land. This library would become a great alternative to …

FWIW, it'd be nice to have the same …

This is true, but internally it means it'll simply create two references to the same thing using a … Having a "true" split implementation that is lock-free would be a lot cooler! :) Also, while it's impossible to write an implementation of … Just spelling this out for anyone reading this ✌️
… some buffered data
How is this not lock free? If you use tokio::io::split it will use a Mutex internally.
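For readers following along: `tokio::io::split` makes both halves share the one underlying stream behind a lock, which is what the comment above is pointing at. A rough std-only sketch of that shape (the `Duplex` type and its methods are invented for illustration; tokio's actual implementation differs in detail):

```rust
use std::sync::{Arc, Mutex};

// Stand-in for a bidirectional stream (a real program would wrap a
// TcpStream or similar).
struct Duplex {
    inbox: Vec<u8>,
    outbox: Vec<u8>,
}

// Lock-based split: both halves are handles to the same stream, and
// every I/O operation must take the lock first.
struct ReadHalf(Arc<Mutex<Duplex>>);
struct WriteHalf(Arc<Mutex<Duplex>>);

fn split(s: Duplex) -> (ReadHalf, WriteHalf) {
    let shared = Arc::new(Mutex::new(s));
    (ReadHalf(shared.clone()), WriteHalf(shared))
}

impl ReadHalf {
    fn read(&self) -> Vec<u8> {
        std::mem::take(&mut self.0.lock().unwrap().inbox)
    }
}

impl WriteHalf {
    fn write(&self, bytes: &[u8]) {
        self.0.lock().unwrap().outbox.extend_from_slice(bytes)
    }
}
```

A "true" split instead gives each half exclusive ownership of its own direction's state, so reads and writes never contend on a shared lock.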
Yes, that's what I do mostly, because I want my code to be compatible with futures::Stream and Sink. I shouldn't have, but I had to code it in a bit of a rush 👀

Yeah, no, I'm agreeing with you; that's exactly what I'm trying to say.

Ah yes, but it is mainly for compatibility, because sometimes it might be useful to use futures::StreamExt. Anyway, the user can implement it on their own, given that …

@matheus23 about poll in …

Personally, I'd highly prefer returning the mandatory frame :)
Indeed. Me too |
Thanks for the PR, and sorry for the slow replies. I hope you don't mind me taking some time here, as this is a relatively big change :)
It seems there is a regression in the echo_server benchmark with larger payloads:
# this pr
$ ./load_test 10 0.0.0.0 8080 0 0 102400
Msg/sec: 35997.750000
Msg/sec: 35021.000000
# main
$ ./load_test 10 0.0.0.0 8080 0 0 102400
Msg/sec: 42045.750000
Msg/sec: 42146.500000
(measured on an M1 macbook; similar numbers on x64 Linux server)
Just as an FYI, I've been trying to get the benchmarks to compile and run here on my NixOS machine to help diagnose the regression, but I'm still fighting my way through linker issues. I'll of course post once I've got something.
I managed to reproduce the benchmark numbers (on Linux rather than macOS), but I cannot find where the regression is.
Co-authored-by: Conrad Ludgate <[email protected]>
src/lib.rs (Outdated)

```diff
@@ -197,7 +259,7 @@ pub(crate) struct WriteHalf {
     vectored: bool,
     auto_apply_mask: bool,
     writev_threshold: usize,
-    write_buffer: Vec<u8>,
+    buffer: BytesMut,
```
Using Vec is faster in my testing. To replicate `buffer.advance(written)` you can do `buffer.splice(..written, [0u8; 0])`. Because this is expensive, what I found works better is to instead have a `buf_pos: usize` that I increment, and only in `start_send` do I run

```rust
self.buffer.splice(..self.buf_pos, [0u8; 0]);
self.buf_pos = 0;
```

before the `fmt_head` call that writes the frame header into the buffer. This is because `BytesMut::advance` seems to be kinda expensive.
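The pattern described above can be sketched with a plain Vec. The `buffer`/`buf_pos` names follow the comment, but this is a stand-alone illustration, not the crate's code: partial socket writes only bump a cursor, and the buffer is compacted once per frame instead of after every write.

```rust
// Deferred-compaction write buffer: advancing past written bytes is a
// cheap counter bump; the expensive memmove happens once, in compact().
struct WriteBuf {
    buffer: Vec<u8>,
    buf_pos: usize, // bytes already written to the socket
}

impl WriteBuf {
    // Called after each (possibly partial) socket write.
    fn advance(&mut self, written: usize) {
        self.buf_pos += written;
    }

    // Called once per frame (e.g. at the start of start_send), before the
    // next frame header is appended: drop the already-written prefix in a
    // single splice, then reset the cursor.
    fn compact(&mut self) {
        self.buffer.splice(..self.buf_pos, [0u8; 0]);
        self.buf_pos = 0;
    }

    // Bytes still pending for the socket.
    fn pending(&self) -> &[u8] {
        &self.buffer[self.buf_pos..]
    }
}
```

The trade-off: pending bytes keep their dead prefix alive until the next `compact()`, in exchange for doing one memmove per frame rather than one per write.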
Further testing shows that, for large buffers, removing the vectored write support is a significant source of the regression. Now the difference is just 43500 Msg/sec compared to 44000 Msg/sec. Opened a PR against this PR 😅 dgrr#1
re-introduce vectored writes to the poll-based impl
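For background on what vectored writes buy here: the frame header and payload can be handed to the OS in a single call without first copying them into one contiguous buffer. A hedged, std-only sketch (not the library's code; `write_frame` is an invented helper, and the partial-write handling re-slices manually since `IoSlice::advance_slices` is unstable):

```rust
use std::io::{IoSlice, Write};

// Write a frame as header + payload using vectored I/O, handling
// partial writes by re-slicing from the current offset.
fn write_frame<W: Write>(w: &mut W, head: &[u8], payload: &[u8]) -> std::io::Result<()> {
    let total = head.len() + payload.len();
    let mut written = 0;
    while written < total {
        let n = if written < head.len() {
            // Part of the header is still unwritten: send both pieces at once.
            let bufs = [IoSlice::new(&head[written..]), IoSlice::new(payload)];
            w.write_vectored(&bufs)?
        } else {
            // Header is done; only payload bytes remain.
            w.write(&payload[written - head.len()..])?
        };
        if n == 0 {
            return Err(std::io::ErrorKind::WriteZero.into());
        }
        written += n;
    }
    Ok(())
}
```

On a real socket this avoids the copy that a single flat buffer would require, which matches the thread's finding that dropping vectored writes hurt the large-payload benchmark.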
@littledivy it seems that this PR is OK now, thanks to the contributions of @conradludgate
@littledivy All checks have passed successfully 🚀. Could you please review and merge this PR at your convenience 🙏. Thank you @dgrr and @conradludgate for your kind contributions in making this library even better. 🫡
## Description

Before, we used to depend on both tungstenite version 0.21 as well as 0.24, because:

```
tungstenite v0.21.0
└── tokio-tungstenite v0.21.0
    └── tokio-tungstenite-wasm v0.3.1
        ├── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)
        └── iroh-relay v0.29.0 (/home/philipp/program/work/iroh/iroh-relay)
            ├── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)
            └── iroh-net-report v0.29.0 (/home/philipp/program/work/iroh/iroh-net-report)
                └── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)

tungstenite v0.24.0
└── tokio-tungstenite v0.24.0
    ├── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)
    └── iroh-relay v0.29.0 (/home/philipp/program/work/iroh/iroh-relay)
        ├── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)
        └── iroh-net-report v0.29.0 (/home/philipp/program/work/iroh/iroh-net-report)
            └── iroh v0.29.0 (/home/philipp/program/work/iroh/iroh)
```

Basically, `tokio-tungstenite-wasm` pulls in `0.21` and there's no newer version of it yet. But we updated all our dependencies, including `tungstenite`, duplicating it.

## Notes & open questions

I want this to be temporary until we can finally switch to `fastwebsockets` entirely once it implements [`poll`-based methods](denoland/fastwebsockets#78) (but I worry the project's maintenance is ... unclear). I checked the [tungstenite changelog](https://github.com/snapview/tungstenite-rs/blob/master/CHANGELOG.md), and it doesn't look like there's anything critical in there. The `rustls` update doesn't affect us - we don't duplicate rustls versions after this rollback.

## Change checklist

- [x] Self-review.
- [x] Documentation updates following the [style guide](https://rust-lang.github.io/rfcs/1574-more-api-documentation-conventions.html#appendix-a-full-conventions-text), if relevant.
- ~~[ ] Tests if relevant.~~
- [x] All breaking changes documented.