Running the tool with no downtime #59

Open
redserpent7 opened this issue Oct 28, 2021 · 0 comments
Hi,

I am currently running a Neo4j 3.5.23 cluster that consists of 3 core servers and 2 read replicas.

Recently we deleted close to 12M nodes along with their relationships, and the database size increased dramatically; it is currently around 120 GB with 54M nodes and 456M relationships.

I tried copying the DB on one of the read replicas with keep-node-ids set to false, and it took around 3 hours to complete.

Now my understanding is that the entire cluster (all members) needs to be offline in order to copy the store, not to mention re-applying the indexes afterwards, which basically means that our production environment would be down for over 3 hours.

I was wondering if there is a way to mitigate this. What are my options to avoid going completely offline, or at least to keep the downtime below, say, 30 minutes, if that is possible at all?
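For context, one thing I can do ahead of time is capture the existing index definitions before the copy, so re-creating them afterwards is a matter of replaying statements rather than rediscovering them. A rough sketch of what I had in mind (the address and credentials are placeholders; `db.indexes()` is the built-in procedure in Neo4j 3.5):

```shell
# Dump the human-readable index definitions (e.g. "INDEX ON :Label(prop)")
# from the running 3.5 instance before taking it offline for the copy.
# bolt address, user, and password below are placeholders for our setup.
cypher-shell -a bolt://localhost:7687 -u neo4j -p '<password>' \
  "CALL db.indexes() YIELD description RETURN description;" > indexes.txt
```

The saved `indexes.txt` can then be turned into `CREATE INDEX` statements to run once the copied store is back online, though the index population itself still takes time on a store this size.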
