Problem: Memory usage keeps increasing when a single client subscribes to a large number of channels.
Scale:
- Subscriptions: 3,691 channels
- Message frequency: 0 to 4 messages per second per channel
- Peak network download rate: ~400 Mbps
- Estimated 99th-percentile callback latency: ~100 microseconds
Environment:
- .NET 6.0
- StackExchange.Redis 2.7.33
- `ConnectionMultiplexer` configuration: `ClientName`, `ReconnectRetryPolicy`, `SyncTimeout`, `AsyncTimeout`
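For context, a minimal sketch of the kind of `ConnectionMultiplexer` setup described above. The endpoint, client name, and timeout values are placeholders, not our production values:

```csharp
using StackExchange.Redis;

var options = new ConfigurationOptions
{
    ClientName = "subscriber-01",                       // hypothetical name
    SyncTimeout = 5000,                                 // placeholder, ms
    AsyncTimeout = 5000,                                // placeholder, ms
    ReconnectRetryPolicy = new ExponentialRetry(5000),  // placeholder policy
};
options.EndPoints.Add("localhost:6379");                // placeholder endpoint

var mux = await ConnectionMultiplexer.ConnectAsync(options);
```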
Attempted Solutions and Findings
- Garbage collection: Forcing a GC provides temporary relief but doesn't solve the issue.
- Memory profiling: The surviving objects are primarily `Byte[]` instances held by `ChannelMessageQueue`.
- Redis server queue limit: Configuring a maximum queue size on the Redis server doesn't help.
- Source code digging: `ChannelMessageQueue` creates an unbounded channel here. We therefore believe the StackExchange.Redis client fetches messages from the server faster than our callbacks can process them, accumulating a backlog on the client side.
- Upvoted a similar question on Stack Overflow.
- Process splitting: The more processes we run for the same number of subscriptions, the lower the combined memory usage and the higher the CPU utilization.
Here are some numbers at 80% of the scale mentioned above. In the 4-process row, memory growth only stops because the server-side queue fills up and the server starts disconnecting the client.
| Number of Processes | Peak Combined Memory (GB) | Peak Thread Count |
| --- | --- | --- |
| 4 | 37 | 302 |
| 5 | 19 | 337 |
| 8 | 13.3 | 507 |
| 10 | 12.7 | 614 |
| 40 | 12.2 | 2184 |
| ... | ... | ... |
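Given the unbounded-channel finding above, one mitigation we are considering (a sketch only, not an API the library provides for this) is to bypass `ChannelMessageQueue` by using the callback `Subscribe` overload and funneling messages into a bounded `System.Threading.Channels` queue, so client-side memory is capped deterministically. The channel name and capacity are illustrative, and `mux` is assumed to be an already-connected `ConnectionMultiplexer`:

```csharp
using System.Threading.Channels;
using StackExchange.Redis;

// Bounded local queue: caps memory at ~capacity * payload size.
var queue = Channel.CreateBounded<(RedisChannel Channel, RedisValue Message)>(
    new BoundedChannelOptions(10_000)          // illustrative capacity
    {
        SingleReader = true,
        // When full, drop the oldest message instead of growing without
        // bound. Trades message loss under overload for stable memory.
        FullMode = BoundedChannelFullMode.DropOldest,
    });

ISubscriber sub = mux.GetSubscriber();         // assumes existing multiplexer
await sub.SubscribeAsync(RedisChannel.Literal("updates"),   // hypothetical channel
    (channel, message) => queue.Writer.TryWrite((channel, message)));

// Drain on a single dedicated consumer task.
_ = Task.Run(async () =>
{
    await foreach (var (channel, message) in queue.Reader.ReadAllAsync())
        Process(channel, message);             // our callback, ~100 µs p99
});
```

Note this caps memory rather than applying true backpressure to the server; a blocking `FullMode.Wait` producer would stall the multiplexer's reader thread, which is exactly the server-side-disconnect behavior asked about below, but it also risks stalling other subscriptions on the same connection.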
Questions and Considerations
1. Memory usage limitation: What strategies can we employ to limit client memory usage effectively? We would prefer that the client stop fetching items from the server when it cannot keep up, so that the server-side queue fills and the server disconnects the client once the queue exceeds a pre-configured size.
2. Thread count optimization: How can we minimize the number of threads while maintaining performance?
3. Allocation optimization:
   - Current: StackExchange.Redis allocates memory on the heap for each new payload.
   - Hypothesis: Processing could be faster if heap allocation were reduced or eliminated.
   - Potential approach: Process messages directly from the OS socket without an intermediate queue buffer.
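On the first question: if the client did stop reading, the server-side disconnect behavior we'd prefer is configurable in Redis itself via the pub/sub output-buffer limit. The values below are Redis's shipped defaults, shown only to illustrate the knob:

```
# redis.conf: disconnect a pub/sub client whose output buffer exceeds the
# hard limit (32mb), or stays above the soft limit (8mb) for 60 seconds.
client-output-buffer-limit pubsub 32mb 8mb 60
```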
We appreciate any insights or recommendations to address these challenges. Thank you for your assistance!