Currently, the copy chunk size is hard-coded in the API at 10,000 records. The efficiency of the COPY functionality depends on choosing an appropriate number of records to copy in at once, relative to the total number of records being loaded. This can be hard to determine up front, so the user may want to tweak the size on a test site first. They may also choose the size based on the disk space available for the temporary file that gets copied into the database.
Lacey suggested the following in PR #19 (referring to $copy_chunk_size = 10000; in the API):
I would allow this to be set via variable_get()/variable_set().
Change this line to $copy_chunk_size = variable_get('genotypes_loader_cp_chunk_size', 10000);
Set it via variable_set() in the first Drush function after exposing an option to the user. This way you don't have to worry about passing the value through the chain of functions, while still providing a way for your user to change this value per file.
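A minimal sketch of how this could be wired up, assuming Drupal 7 and Drush: the command name (load-genotypes), callback, and --chunk-size option below are hypothetical names for illustration, not the module's actual definitions.

```php
<?php

/**
 * Implements hook_drush_command().
 *
 * Hypothetical command definition exposing a --chunk-size option.
 */
function genotypes_loader_drush_command() {
  $items['load-genotypes'] = array(
    'description' => 'Load a genotype file into the database.',
    'options' => array(
      'chunk-size' => 'Number of records per COPY chunk (default: 10000).',
    ),
  );
  return $items;
}

/**
 * Drush command callback (name follows the drush_HOOK_COMMAND convention).
 */
function drush_genotypes_loader_load_genotypes() {
  // Persist the user-supplied chunk size so the API can read it back with
  // variable_get() instead of passing it through the chain of functions.
  $chunk_size = (int) drush_get_option('chunk-size', 10000);
  variable_set('genotypes_loader_cp_chunk_size', $chunk_size);

  // ... invoke the loader API here; it reads the variable as shown above ...
}
```

The API side then only needs the one-line change Lacey suggested: variable_get('genotypes_loader_cp_chunk_size', 10000) falls back to the current default whenever the user hasn't set a value.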