Expected Behavior
No error message notification displayed in Overview
Current Behavior
After the data node is set up, the following error message is displayed in the Overview:
Elasticsearch nodes disk usage above low watermark
(triggered a day ago)
There are Elasticsearch nodes in the cluster running out of disk space, their disk usage is above the low watermark. For this reason Elasticsearch will not allocate new shards to the affected nodes. The affected nodes are: [127.0.1.1]
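For anyone debugging the same thing, it can help to see the disk numbers the allocation logic itself is working from, rather than `df`. Here is a minimal sketch, assuming the Data Node's embedded OpenSearch HTTP API is reachable unauthenticated at `localhost:9200` — a secured Data Node install may require TLS and credentials instead, so adjust `BASE` to your setup:

```python
import json
import urllib.request

# Assumed endpoint: Graylog Data Node wraps OpenSearch, which listens on
# port 9200 by default. A secured setup may need https:// and credentials.
BASE = "http://localhost:9200"

def get(path):
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)

# Per-node disk usage as the allocation decider sees it.
for node in get("/_cat/allocation?format=json"):
    print(node.get("node"), node.get("disk.percent"), "% used,",
          node.get("disk.avail"), "available")

# Effective disk-watermark settings, including defaults.
settings = get("/_cluster/settings?include_defaults=true&flat_settings=true")
for scope in ("persistent", "transient", "defaults"):
    for key, value in settings.get(scope, {}).items():
        if "allocation.disk" in key:
            print(f"{scope}: {key} = {value}")
```

If `disk.percent` is well below 85% on every node, the notification really is spurious; if it is high, the notification may be measuring a different partition than the one in the `df` output.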
Possible Solution
I am new to data nodes, but I think the notification is being generated in error (i.e., there is nothing wrong with the data node itself). I compared the watermark configuration to my production server (running Graylog on OpenSearch without a data node), and the configurations are the same:
"disk":{"threshold_enabled":"true","watermark":{"flood_stage":"95%","high":"90%","low":"85%","enable_for_single_data_node":"false"}}
Both my production and my test server have the same disk size, OS, etc. Details below.
Dismissing the notification will clear it, but it will return a few minutes later.
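Not a fix for the root cause, but if the thresholds themselves need adjusting (percentage watermarks leave little headroom on a small 20 GB disk), they can be overridden at runtime through the cluster settings API. A sketch under the same unauthenticated-localhost assumption as above; the byte values are illustrative, not recommendations:

```python
import json
import urllib.request

BASE = "http://localhost:9200"  # assumed endpoint; adjust for TLS/auth

# Illustrative override: absolute free-space thresholds instead of
# percentages. Pick values that make sense for your disk size.
body = json.dumps({
    "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "3gb",
        "cluster.routing.allocation.disk.watermark.high": "2gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
    }
}).encode()

req = urllib.request.Request(
    BASE + "/_cluster/settings",
    data=body,
    method="PUT",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

Note that all three watermarks have to be set together when switching to byte values, since percentage and byte thresholds cannot be mixed.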
Steps to Reproduce (for bugs)
Create a new LXC VM with Ubuntu.
Follow the Graylog documentation for installation; install the data node and Graylog on the same server.
Complete preflight and configure a single data node with default settings (i.e., no custom CA, etc.).
Wait a few minutes; the watermark error notification will appear in System/Overview.
Context
I think the error can be ignored. Below is a df screenshot of my test environment; Graylog is installed on the partition with 32% used (68% free).
My test server has 20GB of disk allocated and no inputs configured. This is the same disk size as my homelab production server (OpenSearch with 4 inputs and 30 days of data) and that server runs without error.
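To compare the df numbers against the percent-based thresholds directly, something like this works; the data path below is a placeholder — point it at whatever partition the Data Node actually writes indices to, which may not be the one Graylog itself is installed on:

```python
import shutil

# Placeholder path -- substitute the partition the Data Node stores
# indices on (check your datanode.conf for the configured data location).
DATA_PATH = "/var/lib/graylog-datanode"

usage = shutil.disk_usage(DATA_PATH)
pct_used = usage.used / usage.total * 100
print(f"{pct_used:.1f}% used of {usage.total / 1e9:.1f} GB")

# Percent-based watermarks fire on used space: 85% low, 90% high, 95% flood.
for name, threshold in (("low", 85), ("high", 90), ("flood_stage", 95)):
    state = "ABOVE" if pct_used >= threshold else "below"
    print(f"{name} watermark ({threshold}%): {state}")
```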
Your Environment
Graylog Version: 6.1.1 enterprise
Java Version: Default
OpenSearch Version: Default
MongoDB Version: 7.0.x
Operating System: Ubuntu 22.04 LTS. I am using Proxmox as my hypervisor.
Browser version: Safari 18.1
julsssark changed the title to "Clean 6.1.1 single-node datanode install results in a watermark error even though disk is 32% used" (Nov 3, 2024)
Update: I enabled all 4 of my syslog inputs and began loading data into my Graylog 6.1.2 data node. Over the last week that has added about 500 MB of data. I still have the low watermark error, but otherwise everything is working fine.
Update 2: I now have a high watermark error. I am going to let it keep running and see what happens. The disk still has plenty of free space.
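Worth watching while letting it run: the stage after the high watermark is the flood stage, at which OpenSearch puts an index.blocks.read_only_allow_delete block on indices and writes start failing. A sketch for spotting and clearing that block, under the same unauthenticated-localhost assumption as the earlier snippets:

```python
import json
import urllib.request

BASE = "http://localhost:9200"  # assumed endpoint; adjust for TLS/auth

# List any indices carrying the flood-stage read-only block.
url = (BASE + "/_all/_settings/index.blocks.read_only_allow_delete"
       "?flat_settings=true")
with urllib.request.urlopen(url) as resp:
    blocked = {k: v for k, v in json.load(resp).items() if v.get("settings")}
print("blocked indices:", list(blocked) or "none")

# Clear the block -- only sensible once disk pressure is actually resolved.
body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
req = urllib.request.Request(
    BASE + "/_all/_settings", data=body, method="PUT",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```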