
Error log: Disk “high” watermark exceeded – The final warning

November 21, 2025

Error log: This is not an error but a critical warning in your opensearch.log file. It's the last step before your cluster locks up (Blog #5).

[WARN ][o.o.c.r.a.DiskThresholdMonitor] [your-node-name] 

  high disk watermark [90%] exceeded on [node-id] [node-name]

  [data_path] free: [10gb/100gb], 

  relocating shards...

Why… is this happening? This log message is OpenSearch’s self-preservation system telling you a node is critically low on disk space. It’s now taking active, and often disruptive, measures to fix it.

OpenSearch has three disk watermarks:

  1. Low (default 85%): OpenSearch stops allocating new shards to this node.
  2. High (default 90%): This warning. OpenSearch will actively try to relocate existing shards off this node and onto other, less-full nodes.
  3. Flood stage (default 95%): OpenSearch triggers the ClusterBlockException (Blog #5) and marks every index with a shard on this node as read-only to prevent the node from running out of disk entirely.
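All three thresholds are exposed as dynamic cluster settings, so you can inspect them or (temporarily) override them. A minimal sketch: the setting names are real, and the values shown here are simply the defaults listed above:

```
GET /_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.disk

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

Raising these values only buys time on a filling disk; treat it as a stop-gap, not a fix.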

This log means your cluster is now busy moving large amounts of data (relocating shards), which will cause high I/O and network traffic, hurting your cluster’s performance.

Best practice:

1.  Act immediately: This is your last warning before the cluster becomes read-only.

2. Identify the fullest nodes: Run GET /_cat/allocation?v&s=disk.percent:desc to see a list of all nodes, sorted by disk usage.
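The output has one row per node with the `_cat/allocation` columns shown below. The values in this row are purely illustrative; on a node at the high watermark you would see `disk.percent` at 90 or above:

```
shards disk.indices disk.used disk.avail disk.total disk.percent host     ip       node
   120         88gb      91gb        9gb      100gb           91 10.0.0.5 10.0.0.5 data-node-1
```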

3. Delete old data (Fastest Fix): The quickest way to get out of the “high” watermark is to delete old, unneeded indices.
DELETE /my-old-logs-2025-01-01

4. Implement index state management (ISM): This is the permanent solution. Set up ISM policies to automatically delete or roll over indices after a certain time (e.g., delete logs older than 30 days).
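A sketch of such a policy using the ISM plugin's policy API. The policy name, index pattern, and 30-day age are illustrative; adjust them to your own retention requirements:

```
PUT /_plugins/_ism/policies/delete_old_logs
{
  "policy": {
    "description": "Delete log indices older than 30 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["my-old-logs-*"], "priority": 100 }
    ]
  }
}
```

The `ism_template` block attaches the policy automatically to newly created indices that match the pattern, so retention is enforced without manual intervention.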

5. Scale your cluster: If you can’t delete data, you must add more data nodes or increase the disk size on your existing nodes.

What else can I do? Unsure which data to delete or how to set up an ISM policy? This is a critical part of managing a healthy cluster. Ask the OpenSearch community for examples, or reach out in the #general channel of the OpenSearch Slack.
