Without any change on our backend servers, which run this code in a container, the following out-of-memory (OOM) exception started triggering our alerting on August 24, 2020 -- memory usage surpasses the 220 MB per-container threshold and climbs to almost 1 GB before the container is evicted:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
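For reference, here is a minimal sketch of the kind of probe we could drop into the service to catch the leak in the act (the 200 MB trigger, 60 s interval, and all names are ours, not anything from the kin-node SDK; `v8.writeHeapSnapshot()` needs Node >= 11.13):

```ts
import { getHeapStatistics, writeHeapSnapshot } from "v8";

const SNAPSHOT_TRIGGER_BYTES = 200 * 1024 * 1024; // just under our 220 MB alert threshold
let snapshotTaken = false;

setInterval(() => {
  const { used_heap_size, heap_size_limit } = getHeapStatistics();
  console.log(
    `V8 heap used: ${(used_heap_size / 1048576).toFixed(1)} MB ` +
      `(limit ${(heap_size_limit / 1048576).toFixed(1)} MB)`
  );

  // Capture one snapshot once the heap crosses the trigger; a second snapshot
  // taken later can be diffed against it in Chrome DevTools to see which
  // objects are being retained.
  if (!snapshotTaken && used_heap_size > SNAPSHOT_TRIGGER_BYTES) {
    snapshotTaken = true;
    const file = writeHeapSnapshot(); // available in Node >= 11.13
    console.log(`heap snapshot written to ${file}`);
  }
}, 60_000);
```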
We've upgraded the kin-node SDK, but it didn't have any effect.
We believe this was triggered by some backend update on the Kin Agora side that didn't play nicely with what we have installed. We have not been able to trace what is causing the memory leak.
On the monitoring graphs, the containers exhibit classic sawtooth memory usage: each container's memory climbs past the 220 MB threshold we configured, up to roughly 1 GB, until the container is evicted by the cluster manager and a new instance is created. We did not see this behavior before August 24, 2020, even though the 220 MB monitoring/alerting threshold has been in place since 2019.
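To confirm the sawtooth on the container graphs really is the JS heap and not native memory growth (for example from the SDK's gRPC connections -- just a guess on our part), a small probe like the sketch below could log the RSS the cluster manager meters next to V8's own accounting; the interval and helper name are placeholders of ours:

```ts
const toMB = (bytes: number): string => `${(bytes / 1048576).toFixed(1)} MB`;

// Log the resident set size (what the container limit applies to) alongside
// V8's heap numbers, so growth can be attributed to the JS heap rather than
// to native allocations such as Buffers held outside the heap.
setInterval(() => {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  console.log(
    `rss=${toMB(rss)} heapUsed=${toMB(heapUsed)} ` +
      `heapTotal=${toMB(heapTotal)} external=${toMB(external)}`
  );
}, 60_000);
```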
Here is the yarn.lock we have for the container: