
Error log: “OutOfMemoryError: Java heap space” – The node crashes

November 21, 2025

Error log: This isn’t a clean JSON error. It’s a fatal error from the Java Virtual Machine (JVM) that you’ll find in your opensearch.log or gc.log files, usually followed by the node crashing.

ERROR: Enormous heap memory setting detected.

java.lang.OutOfMemoryError: Java heap space
Dumping heap to /var/lib/opensearch/java_pid1234.hprof ...

Heap dump file created [123456789 bytes in 20.123s]

Or:

[ERROR][o.o.b.OpenSearchUncaughtExceptionHandler] [your-node-name] 

Why… is this happening? This is one of the most serious errors you can encounter. It means the OpenSearch process (a Java application) tried to allocate more memory than the maximum amount it was given (its “heap size”), and it failed. The node will almost always become unresponsive and die.

This is not the same as a CircuitBreakingException. A circuit breaker prevents an OutOfMemoryError (OOM) by stopping a bad query. An OOM means the circuit breakers failed, or the memory pressure came from something else.
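To see how much pressure each breaker was under, and whether any of them actually tripped before the crash, the node stats API exposes per-breaker counters. This is a quick diagnostic sketch, assuming you can still reach at least one node:

GET _nodes/stats/breaker

In the response, each breaker (parent, fielddata, request, and so on) reports its limit_size_in_bytes, estimated_size_in_bytes, and a tripped counter. A parent breaker sitting near its limit with zero trips is a hint that the allocation that killed the node never went through a breaker at all.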

Common causes:

  1. Heap Size Too Small: Your node is simply too busy for the heap you’ve given it (e.g., you have a 1GB heap for a node handling heavy aggregations).
  2. Circuit Breakers Too High: You (or a plugin) may have raised the circuit breaker limits too high (e.g., indices.breaker.total.limit: 99%), which stops them from protecting the heap (see the settings check after this list).
  3. Memory Leaks: A bug in OpenSearch or, more commonly, a third-party plugin, is consuming memory and not releasing it.
  4. Fielddata: You enabled fielddata=true on a high-cardinality text field, which loaded massive amounts of data into the heap, bypassing the normal circuit breakers.
  5. Extremely Large Requests: A single, massive request (like a bulk request with millions of documents or a terms aggregation on billions of items) can overwhelm the heap before the circuit breakers react.
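For cause 2, you can check whether the parent breaker limit has been overridden, and put it back, via the cluster settings API. This is a sketch using the standard setting name; in recent versions the default for indices.breaker.total.limit is 95% of the heap:

GET _cluster/settings?include_defaults=true

PUT _cluster/settings
{
  "persistent": {
    "indices.breaker.total.limit": null
  }
}

Setting the value to null removes the override so the node falls back to its built-in default.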

Best Practice:

  1. Set Heap Size Correctly: Edit config/jvm.options. Set -Xms (min) and -Xmx (max) to the same value. This should be about 50% of your server’s total RAM, but no more than roughly 30-32GB so the JVM can keep using compressed object pointers (a minimal sketch follows this list).
  2. Leave 50% RAM for OS: The other 50% of your server’s RAM is not wasted. The operating system needs it for the file system cache (Lucene’s “page cache”), which is critical for fast search performance.
  3. Analyze the Heap Dump: The java_pid1234.hprof file (heap dump) is your key to debugging. Use tools like Eclipse MAT or VisualVM to open this file (which can be huge) and see exactly which objects were consuming all the memory. This will often point directly to a bad query or a plugin.
  4. Check for fielddata=true: Run GET /*/_mapping and look for any text fields with fielddata enabled. Disable it and aggregate on a .keyword sub-field instead (see the example after this list).
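Putting items 1 and 4 together, here is a minimal sketch. The heap values assume a server with 64GB of RAM (50% is 32GB, trimmed to 31GB to stay under the compressed-pointers threshold), and my-index / status are placeholder names for your own index and field:

# config/jvm.options
-Xms31g
-Xmx31g

# Instead of fielddata=true on a text field, aggregate on its .keyword sub-field
GET my-index/_search
{
  "size": 0,
  "aggs": {
    "status_counts": {
      "terms": { "field": "status.keyword" }
    }
  }
}

The terms aggregation on status.keyword reads doc values from disk instead of loading field data into the heap.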

What else can I do? An OutOfMemoryError is serious. If you can’t read the heap dump, the community forums are a good place to ask for help, but you’ll need to provide details about your cluster, heap size, and what was happening at the time. For expert heap dump analysis, contact us in the #general channel of the OpenSearch Slack.
