We are migrating a fairly large set of Zabbix data from one server to another (about 1 TB). Migrating the data itself is no issue, but our bottleneck appears to be specifically the dump of data from the source server to a CSV, rather than the write-out.

Can this tool read CSV files that are still being written to, and what happens when it reaches the end?

Can the tool also do some kind of resume function, where it starts where it left off?
I would say you can `tail -f your.csv | timescaledb-parallel-copy` and it will keep reading as new lines are appended, until you terminate the program (a plain `tail` without `-f` would stop at end-of-file). I recommend you give it a try.

About resuming where it left off: I would say no. The tool simply reads a file and copies it into the database; it doesn't handle cases such as partial copies or resuming after a failure.

That said, the tool does report the number of successfully copied rows, so you may be able to resume manually with that information.
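Something like this sketch, for example. It is untested against a live dump; the file path, connection string, database, and table names are placeholders, and the flag names follow the tool's README:

```shell
# Stream a CSV that is still being written into timescaledb-parallel-copy.
# `tail -f -n +1` starts at the first line and keeps following appends;
# stop it with Ctrl-C once the dump on the source side has finished.
# Connection details, database, and table names below are placeholders.
tail -f -n +1 /data/history.csv | timescaledb-parallel-copy \
  --connection "host=localhost user=postgres sslmode=disable" \
  --db-name zabbix \
  --table history \
  --workers 4
```

The `-n +1` matters: by default `tail -f` only shows the last 10 lines before following, so without it the rows already dumped would be skipped.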
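A hypothetical manual resume could look like this, assuming the CSV has a single header row and the tool reported N rows successfully copied (file path, connection string, and names are placeholders):

```shell
# Manual resume sketch: skip the header line plus the N rows the tool
# reported as already copied, and pipe the remainder back in.
# N, the file path, and the connection details are placeholders.
N=123456
tail -n +$((N + 2)) /data/history.csv | timescaledb-parallel-copy \
  --connection "host=localhost user=postgres sslmode=disable" \
  --db-name zabbix \
  --table history \
  --workers 4
```

`tail -n +K` prints from line K onward, so `N + 2` lands on the first row after the header and the N copied rows. Note this is only safe if the earlier run stopped cleanly after whole rows.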