
How does the number of messages handled per second change in relation to the number of clients? #50

elhigu opened this issue Mar 18, 2017 · 3 comments



elhigu commented Mar 18, 2017

From reading the article, I didn't understand whether the total number of messages per second grows linearly or exponentially with the number of clients.


jackc commented Mar 18, 2017

I might be misunderstanding what you are asking, but I'll try to answer.

The test sends a broadcast to all connected clients, so assuming it could maintain the same broadcast rate, messages per second would grow linearly with the number of clients. In practice, though, it couldn't maintain the same broadcast rate. That's part of what the benchmark was examining -- how many clients can be subscribed while still keeping acceptable latency at a given broadcast rate.

A better way of thinking about messages sent vs. number of connected clients may be in terms of how many messages are sent for each test group (e.g. 10,000 clients). That number grows linearly (e.g. a test run will send 2x as many messages at 20,000 clients as at 10,000, even though it may take longer to do so).
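To make the fan-out arithmetic concrete, here is a minimal sketch in Go (not the benchmark's actual server code; the `hub` type and channel-per-client layout are assumptions for illustration): one broadcast produces one message per connected client, so at a fixed broadcast rate, messages per second scales linearly with the client count.

```go
// A minimal sketch of broadcast fan-out. One broadcast queues one copy of the
// message for every connected client, so total messages/second is roughly
// broadcastRate * clientCount.
package main

import "fmt"

// hub tracks the outbound queues of connected clients.
type hub struct {
	clients []chan string
}

// broadcast queues one copy of msg for every connected client and returns
// how many messages that single broadcast produced.
func (h *hub) broadcast(msg string) int {
	for _, c := range h.clients {
		c <- msg // a real server would write to each client's websocket here
	}
	return len(h.clients)
}

func main() {
	h := &hub{}
	for i := 0; i < 10000; i++ {
		h.clients = append(h.clients, make(chan string, 1))
	}

	sent := h.broadcast("hello")
	// At 10 broadcasts/second and 10,000 clients that is 100,000 messages/second;
	// doubling the clients to 20,000 doubles it to 200,000.
	fmt.Printf("one broadcast to %d clients = %d messages\n", len(h.clients), sent)
}
```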


elhigu commented Mar 19, 2017

Thanks, that was exactly what I wanted to know. So each new client is not communicating with every other client, only with the server.

I was just impressed that node was able to handle 13k clients with a single thread and a really small memory footprint, while e.g. elixir and go handled only around 24k clients with 8 hardware threads. I still don't understand how it's possible that node would perform 2-4x better (13k per thread) if one were to spawn just 8 processes and put a load balancer in front to handle messaging (I know having 8 processes that each handle just their own slice of connections is not the same, but it's interesting anyway)...

It would also have been nice to have numbers on how much of the hardware's resources were used (in addition to memory, also network and CPU) by each server implementation when running at max clients.


jackc commented Mar 20, 2017

Check out https://github.com/hashrocket/websocket-shootout/tree/master/results for the raw results, including details on the server hardware. It's been a while since I worked on it, but at least in the second round of testing, node was run in a cluster. IIRC, the highest-performing servers were bound more by network and OS limits than by CPU usage, so some platforms really didn't gain much from multi-threading or multi-processing.
