
Address new CPU bottlenecks #1905

Open
larseggert opened this issue May 15, 2024 · 0 comments

larseggert commented May 15, 2024

For the server, `core::iter::traits::iterator::Iterator::fold`, called from `neqo_transport::connection::Connection::input_path`, takes ~15% of cycles. I suspect this is related to the `SentPackets` data structure; we should work on that.

[Flamegraph: neqo-neqo-reno-pacing server (neqo-neqo-reno-pacing.server.svg)]
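Since iterator adapters like `sum()` and `max()` lower to `Iterator::fold`, one plausible explanation is a linear walk over the sent-packet store on every incoming ACK. Here is a minimal sketch of that pattern; the names and layout are hypothetical, not neqo's actual `SentPackets` code:

```rust
// Hypothetical sketch: if sent packets live in a flat collection that is
// scanned linearly for every arriving ACK, the scan shows up in profiles
// as Iterator::fold. With a large in-flight window this is O(n) per ACK.

struct SentPacket {
    pn: u64,     // packet number
    size: usize, // bytes on the wire
    acked: bool,
}

struct SentPackets {
    packets: Vec<SentPacket>, // assumed flat storage, ordered by pn
}

impl SentPackets {
    // Bytes still in flight: a full scan; sum() is implemented via fold().
    fn bytes_in_flight(&self) -> usize {
        self.packets.iter().filter(|p| !p.acked).map(|p| p.size).sum()
    }

    // Marking an ACK range also walks the whole collection.
    fn on_ack_range(&mut self, first: u64, last: u64) {
        for p in self.packets.iter_mut() {
            if p.pn >= first && p.pn <= last {
                p.acked = true;
            }
        }
    }
}

fn main() {
    let mut sent = SentPackets {
        packets: (0..4)
            .map(|pn| SentPacket { pn, size: 1200, acked: false })
            .collect(),
    };
    sent.on_ack_range(0, 1);
    // Called once per incoming packet, scans like this dominate at high rates.
    println!("in flight: {} bytes", sent.bytes_in_flight());
}
```

If this is indeed the hot path, a structure that can be addressed by packet number (e.g., a deque drained from the front, or a range map) would bring per-ACK cost closer to O(packets acked) than O(packets in flight).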

For the client, no clear new bottleneck emerges. That's not really surprising, because our server is the bottleneck: the client only uses 50-66% of a core while the server maxes out its core.

[Flamegraph: neqo-neqo-reno-pacing client (neqo-neqo-reno-pacing.client.svg)]

Running the neqo client against the msquic server (which makes our client the bottleneck) shows `input_path` taking quite a bit more time than above. More surprisingly, the overall graphs look quite different.

[Flamegraph: neqo-msquic-cubic-nopacing client (neqo-msquic-cubic-nopacing.client.svg)]
