Motivation

Generally, when there are many (e.g. 10+) streaming jobs running in one RisingWave instance, it's no longer a good idea to use all CPU cores for every fragment. This proposal tries to address that problem.
Recently, we found several issues related to the number of actors per CN:

- For various reasons, each actor has a fixed memory footprint. The more actors there are, the less memory is left for our streaming cache; in the longevity test, the cached data was nearly zero.
- The HummockUploader failed to make the new version in one `checkpoint_interval`, which caused barriers to pile up. Fixed by perf(storage): simplify table watermark index #15931.
Design
I think we could introduce a soft limit and a hard limit on the number of actors per CN:
- When the number of actors is below the soft limit, nothing happens. This should be the case for 90% of users.
- When the number of actors is above the soft limit but below the hard limit, creating a new materialized view/sink or scaling an existing job will show a notice message urging users to scale in some streaming queries.
- When the number of actors is above the hard limit, RisingWave will refuse the request and return an error.
In the notice message, users are encouraged to use the `ALTER` command to set a smaller parallelism on existing streaming jobs.
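For illustration, the notice could point users at a command along these lines (the object name `my_mv` and the parallelism value are hypothetical; the exact syntax should follow RisingWave's `ALTER ... SET PARALLELISM` documentation):

```sql
-- Reduce the parallelism of an existing streaming job to free up actors on each CN.
ALTER MATERIALIZED VIEW my_mv SET PARALLELISM = 4;
```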
Implementation
The implementation is trivial, but we need to carefully pick a default threshold (the exact value is TBD).

Should we limit only the stateful actors, or all actors? I'm asking this mainly because we now have multiple actors for a table under the new design of DML. cc @st1page
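The three-case check described in the Design section could be sketched as follows. All names here (`check_actor_limit`, `LimitDecision`, the example limit values) are illustrative and not RisingWave's actual internals:

```rust
/// Outcome of checking a request against the per-CN actor limits.
#[allow(dead_code)]
enum LimitDecision {
    /// Below the soft limit: proceed silently.
    Ok,
    /// Between the soft and hard limits: proceed, but surface a notice.
    Notice(String),
    /// Above the hard limit: refuse the request with an error.
    Reject(String),
}

/// Hypothetical check run when creating a new streaming job or scaling an
/// existing one: `current` is the number of actors already on the CN,
/// `new_actors` is how many the request would add.
fn check_actor_limit(current: usize, new_actors: usize, soft: usize, hard: usize) -> LimitDecision {
    let total = current + new_actors;
    if total > hard {
        LimitDecision::Reject(format!(
            "{total} actors would exceed the hard limit of {hard} per CN"
        ))
    } else if total > soft {
        LimitDecision::Notice(format!(
            "{total} actors exceeds the soft limit of {soft}; \
             consider ALTER ... SET PARALLELISM to scale in existing jobs"
        ))
    } else {
        LimitDecision::Ok
    }
}

fn main() {
    // Well below the soft limit: no message.
    assert!(matches!(check_actor_limit(100, 50, 500, 1000), LimitDecision::Ok));
    // Crosses the soft limit: allowed, with a notice.
    assert!(matches!(check_actor_limit(480, 50, 500, 1000), LimitDecision::Notice(_)));
    // Crosses the hard limit: refused.
    assert!(matches!(check_actor_limit(990, 50, 500, 1000), LimitDecision::Reject(_)));
}
```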