
Error Log: Circuit_breaking_exception – Data too large!


Error Log: This error occurs when OpenSearch detects that an operation (often a search query with aggregations) would consume too much memory, potentially causing an OutOfMemoryError and crashing the node.

JSON

{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[parent] Data too large, data for [] would be [123456789b] which is larger than the limit of [67108864b]",
        "bytes_wanted" : 123456789,
        "bytes_limit" : 67108864
      }
    ],
    "type" : "circuit_breaking_exception",
    "reason" : "[parent] Data too large, data for [] would be [123456789b] which is larger than the limit of [67108864b]"
  },
  "status" : 429
}

Why… is this happening? OpenSearch uses circuit breakers to prevent operations from consuming too much memory and crashing the Java Virtual Machine (JVM). When a query’s estimated memory usage exceeds a configured limit, the circuit breaker “trips,” aborting the operation and returning this exception.

The most common reasons are:

  1. [parent] Circuit Breaker: This is the most common. The parent breaker tracks the total memory accounted for across all child breakers, so it trips when a query (often with complex aggregations, deeply nested documents, or large terms aggregations) pushes overall JVM heap usage over the limit.
  2. [fielddata] Circuit Breaker: This usually means you’re trying to sort or aggregate on a text field for which fielddata is enabled. fielddata loads all unique terms for a field into memory, which can be extremely expensive for high-cardinality text fields.
  3. [request] Circuit Breaker: An individual request (like a bulk request or a very large search request) is too big.
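
Before changing anything, it helps to confirm which breaker is tripping and how much headroom it has. The node stats API (GET _nodes/stats/breaker) reports this per node. Below is a trimmed, illustrative response: the field names are what the breakers section actually contains, while the node id is a placeholder and the numbers simply mirror the error above.

JSON

{
  "nodes" : {
    "<node_id>" : {
      "breakers" : {
        "parent" : {
          "limit_size_in_bytes" : 67108864,
          "limit_size" : "64mb",
          "estimated_size_in_bytes" : 123456789,
          "estimated_size" : "117.7mb",
          "overhead" : 1.0,
          "tripped" : 3
        }
      }
    }
  }
}

A non-zero tripped count together with an estimated_size close to limit_size tells you which breaker to focus on.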

Best Practice:

  1. Optimize Your Queries/Aggregations (request sketches follow this list):
    • Reduce size: For terms aggregations, reduce the size parameter if you don’t need all results.
    • Filter Before Aggregating: Apply filters to your data before performing aggregations to reduce the dataset.
    • Avoid High-Cardinality terms Aggregations on Text Fields: If you need to aggregate on a text-like field, ensure it has a keyword sub-field and aggregate on that.
    • Pagination: Use search_after or scroll APIs for deep pagination instead of from/size on very large result sets.
  2. Do Not Enable fielddata on text Fields: This is almost always the wrong solution. Use .keyword sub-fields for aggregations and sorting (see the mapping sketch after this list).
  3. Increase JVM Heap (with caution): You can increase the JVM heap size by setting -Xms and -Xmx in jvm.options (or via the OPENSEARCH_JAVA_OPTS environment variable), but be aware that larger heaps lead to longer garbage collection pauses. A general rule of thumb is to allocate no more than 50% of the machine’s RAM to the heap, and no more than ~30-32GB so the JVM can keep using compressed object pointers.
  4. Scale Out: Add more data nodes to your cluster. This distributes the memory load across multiple JVMs.
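
As a sketch of the first set of recommendations: the request below filters the dataset down to the last hour before aggregating, skips fetching hits with "size": 0, caps the terms aggregation at 10 buckets, and targets a keyword sub-field. The index pattern logs-*, the @timestamp field, and the status.keyword field are placeholders for your own mapping.

JSON

GET logs-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  },
  "aggs": {
    "top_statuses": {
      "terms": {
        "field": "status.keyword",
        "size": 10
      }
    }
  }
}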
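For deep pagination, a search_after sketch, again with placeholder names: request_id is a hypothetical unique keyword field used as a tiebreaker, and the values in search_after are the sort values returned with the last hit of the previous page (epoch milliseconds for the date field).

JSON

GET logs-*/_search
{
  "size": 100,
  "sort": [
    { "@timestamp": "asc" },
    { "request_id": "asc" }
  ],
  "search_after": [1763546100000, "req-000123"]
}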
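And for the second recommendation, a minimal mapping sketch (index and field names are hypothetical): the field is indexed as text for full-text search, with a keyword sub-field for aggregations and sorting.

JSON

PUT logs-example
{
  "mappings": {
    "properties": {
      "status": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}

Aggregations then target status.keyword, as in the first sketch above, instead of enabling fielddata on the text field.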

What else can I do? Are your CircuitBreakingException errors persistent? Share your problematic queries and mappings with the OpenSearch community to get specific optimization advice. For expert help with cluster sizing and query optimization, contact us in the #general channel of the OpenSearch Slack.
