
Very poor performance over Viasat #1

Open
bizzbyster opened this issue Jan 14, 2022 · 5 comments

Comments

@bizzbyster
Contributor

bizzbyster commented Jan 14, 2022

Did you ever test this over a geostationary satellite link? Wondering what kind of performance you see. I am seeing speedtest.net results of less than 1 Mbps down and up on a link that is ~40 Mbps down and ~5 Mbps up without the qpep client and server. I wonder if there are QUIC transport settings I can tweak. Thoughts?
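As background to the question about transport settings: on a geostationary link, the bandwidth-delay product (BDP) dictates how much data must be in flight to fill the pipe, and any flow-control or congestion window smaller than that caps throughput regardless of link rate. A rough sketch with assumed numbers (~40 Mbps, ~600 ms GEO round trip; `bdpBytes` is just an illustrative helper, not qpep code):

```go
package main

import "fmt"

// bdpBytes returns the bandwidth-delay product in bytes for a link of
// the given rate (bits per second) and round-trip time (seconds).
func bdpBytes(rateBps, rttSec float64) float64 {
	return rateBps / 8 * rttSec
}

func main() {
	// Assumed numbers for a GEO satellite link: ~40 Mbps down, ~600 ms RTT.
	bdp := bdpBytes(40e6, 0.6)
	fmt.Printf("BDP: %.1f MB\n", bdp/1e6) // prints "BDP: 3.0 MB"
}
```

If the transport's receive windows (stream or connection level) are smaller than this, throughput will plateau well below the link rate, so the quic-go window-size settings are a reasonable first place to look.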

@virtuallynathan
Owner

I never did test it, as I was able to get Starlink. I would be happy to help you set it up, I'd be curious to know how it works in the real world. It should help quite a bit...

@bizzbyster
Contributor Author

Ah ok. I will share my setup and the performance I am seeing here and cross-post in the original repo here: ssloxford/qpep#8. Thanks for your interest.

@virtuallynathan
Owner

Oh, I see, you've already got it running. Have you tried a single-threaded wget with and without qpep from the AWS instance you are running it on?

@bizzbyster
Contributor Author

Hi @virtuallynathan,

To put this debug process in one place, I'm posting more information about the issue in the original qpep repo here -- I hope you will continue to follow and participate there as you're interested.

Thanks!!

@mfoxworthy
Collaborator

mfoxworthy commented Apr 19, 2022

I have this running in a simulated environment and I can achieve 14 Mbps with 720 ms RTT. I am not dealing with schedulers and BER, so it's a very “pure” test. On that same link, running a TCP proxy with the BBR CCA, I can achieve 70 Mbps. There are a few artifacts that I noticed. First, the socket QPEP opens on the client side sets the PSH bit on every packet. This results in an ACK for every segment, and the window never opens to the size the browser advertises. I don't know if that is because the buffer in QPEP really is draining and there really is nothing left to send. It does fill the MSS, and it does send large packets that require segmentation and reassembly.

My guess is that quic-go is using the NewReno CCA, as specified in RFC 9002. QUIC still uses acknowledgments, though it handles them differently than TCP. I am guessing this is the case because as I add or remove latency, I can see the throughput increase and decrease accordingly. My conclusion is that QUIC with HTTP/3 will improve the snappiness of web pages that use TLS, but without a CCA that aggressively opens the transmission window before an ACK is expected, it's no better than TCP with Reno, or CUBIC for that matter, for high throughput.
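The slow window growth described above can be put in concrete terms: in NewReno-style congestion avoidance the window grows by roughly one MSS per RTT, so at a 720 ms RTT, reaching a window that sustains 70 Mbps takes a very long time. A back-of-the-envelope sketch (assumed 1460-byte MSS; ignores slow start and losses, so it's only an order-of-magnitude illustration):

```go
package main

import "fmt"

// renoRampSeconds estimates how long NewReno-style congestion avoidance
// (window grows ~1 MSS per RTT) needs to grow from zero to a window that
// fills a link of rate targetBps at the given RTT. It ignores slow start
// and losses, so treat it as an order-of-magnitude figure only.
func renoRampSeconds(targetBps, rttSec, mssBytes float64) float64 {
	windowBytes := targetBps / 8 * rttSec // BDP the window must reach
	segments := windowBytes / mssBytes    // window size in MSS
	return segments * rttSec              // one RTT per added MSS
}

func main() {
	// Assumed numbers from the simulated link above: 70 Mbps target, 720 ms RTT.
	secs := renoRampSeconds(70e6, 0.72, 1460)
	fmt.Printf("~%.0f minutes to fill the pipe at 1 MSS/RTT\n", secs/60) // ~52 minutes
}
```

Even granting slow start a generous head start, the point stands: at GEO latencies an ACK-clocked additive-increase controller needs minutes, not seconds, to open the window, which is consistent with the 14 vs. 70 Mbps gap observed against BBR.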

I am going to do more testing while adjusting the acknowledgment settings in the server to see if I can improve throughput. However, PEPsal with BBR and an LTE side channel for DNS queries performs much better than QPEP. I'll add some graphs and numbers to the post when I decide how I want to present them.

One last note: I did update QPEP to use the quic-go v0.27 tag. This updates QUIC to version 1 (RFC 9000), which is no longer in draft status. I also set TCP_NODELAY on the client side. I'll send a PR with those changes. It also cross-compiled cleanly for arm and arm64, just FYI.
