Warning Message when insufficient space to hold multiple persistent queues on a file system could be clearer #14839
As I'm still learning these concepts, may I extend your example to check my understanding of this proposed improvement, for you to review before moving forward with an implementation? In your example we configure two pipelines with persistent queues. The relevant config settings are:
PQ2
The
Currently, for each of these configs, we read in the
There are several shortcomings to this approach:
Proposed improvement:
What this would involve is passing through all the configs to build up all the required

Implementation notes: from reviewing the existing methods in that class, I think we could compute the space available on a file system as well as the space used under a given queue:

logstash/logstash-core/lib/logstash/util/byte_value.rb (lines 57 to 73 in 046ea1f)
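As a rough sketch of those implementation notes (hypothetical code, not Logstash's actual implementation; the class and method names here are mine), the JDK's `java.nio.file` API can report both numbers: the usable space on the file system backing a path, and the bytes already consumed under a queue directory:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class QueueDiskUsage {

    // Usable bytes on the file system that backs `path`.
    static long freeSpaceOnFilesystem(Path path) throws IOException {
        return Files.getFileStore(path).getUsableSpace();
    }

    // Bytes already consumed beneath a queue's directory
    // (sum of the sizes of every regular file under it).
    static long usedUnderPath(Path queueDir) throws IOException {
        final long[] total = {0L};
        Files.walkFileTree(queueDir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                total[0] += attrs.size();
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free bytes: " + freeSpaceOnFilesystem(Paths.get(".")));
    }
}
```

With these two measurements per queue path, the validator could compare what the queues may grow to against what the file system can actually hold.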
This commit refactors the `PersistedQueueConfigValidator` class to provide a more detailed, accurate, and actionable warning when pipelines' PQ configs are at risk of running out of disk space. See elastic#14839 for design considerations. The highlights of the changes include accurately determining the free resources on a filesystem disk, and then providing a breakdown of the usage for each of the paths configured for a queue.
**Logstash version**
Logstash 7.x >= 7.17.5
Logstash 8.x >= 8.3.0
Steps to reproduce:
When starting up, Logstash will check the total amount of space required for PQs on a given file system against the amount of disk space left on that file system, logging a warning when the required space exceeds what is available.
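A minimal sketch of that startup check (assumed names; this is not the actual `PersistedQueueConfigValidator` code) would group each queue's configured `queue.max_bytes` by the file store backing its path, then compare the per-store totals against usable space:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class PqSpaceCheck {

    // Sum the configured queue.max_bytes of every PQ, grouped by the
    // file store (file system) that backs each queue's path.
    static Map<FileStore, Long> requiredPerFileStore(Map<Path, Long> maxBytesByQueuePath)
            throws IOException {
        Map<FileStore, Long> required = new HashMap<>();
        for (Map.Entry<Path, Long> entry : maxBytesByQueuePath.entrySet()) {
            FileStore store = Files.getFileStore(entry.getKey());
            required.merge(store, entry.getValue(), Long::sum);
        }
        return required;
    }

    // Warn for every file store whose queues could collectively outgrow it.
    static void warnIfInsufficient(Map<FileStore, Long> required) throws IOException {
        for (Map.Entry<FileStore, Long> entry : required.entrySet()) {
            long free = entry.getKey().getUsableSpace();
            if (entry.getValue() > free) {
                System.err.printf(
                        "warning: queues on %s may need %d bytes but only %d are free%n",
                        entry.getKey(), entry.getValue(), free);
            }
        }
    }
}
```

Grouping by file store rather than by raw path is what lets multiple queues sharing one disk be checked against that disk's single pool of free space.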
However, the warning message emitted is difficult to follow and does not point to the correct remediating action:
I set up a config on my laptop, where I have 312Gi free on my local drive, configured two pipelines (each with a persistent queue configured), and started up Logstash.
I received the following warning message:
This number:

> Please free or allocate 643171352576 more bytes.

feels a little confusing, as I actually need fewer bytes than that for the PQs to operate successfully. The number appears to be:

(total size of disk required across all PQs) - (disk used across all PQs)

But the disk may not be dedicated to PQs, and the number may be misleading.
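To make the arithmetic concrete, here is a worked example with illustrative numbers (hypothetical, not the reporter's actual config): two pipelines, each with a `queue.max_bytes` of 300 GiB, fresh (empty) queues, and 312 GiB free on the disk:

```java
public class ShortfallExample {
    static final long GIB = 1024L * 1024 * 1024;

    public static void main(String[] args) {
        long requiredAcrossPqs = 2L * 300 * GIB; // hypothetical: two PQs at 300 GiB each
        long usedAcrossPqs = 0L;                 // fresh queues, nothing written yet
        long freeOnFilesystem = 312 * GIB;       // free space reported by the OS

        // What the current warning appears to report:
        long reported = requiredAcrossPqs - usedAcrossPqs;
        // What the user actually has to free or allocate:
        long shortfall = requiredAcrossPqs - usedAcrossPqs - freeOnFilesystem;

        System.out.println(reported / GIB);  // 600 -- overstates the problem
        System.out.println(shortfall / GIB); // 288 -- the real gap
    }
}
```

If the current message really does compute required minus used, then in this scenario it asks the user to free 600 GiB when reclaiming 288 GiB would be enough, because the 312 GiB already free is never subtracted.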
It may be more useful to report
It may also be worth strengthening the warning to state that Logstash may fail to start if this is not resolved.