Error Log: This error occurs when OpenSearch detects that an operation (often a search query with aggregations) would consume too much memory, potentially causing an OutOfMemoryError and crashing the node.
```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[parent] Data too large, data for [] would be [123456789b] which is larger than the limit of [67108864b]",
        "bytes_wanted" : 123456789,
        "bytes_limit" : 67108864
      }
    ],
    "type" : "circuit_breaking_exception",
    "reason" : "[parent] Data too large, data for [] would be [123456789b] which is larger than the limit of [67108864b]"
  },
  "status" : 429
}
```
Why is this happening? OpenSearch uses circuit breakers to prevent operations from consuming too much memory and crashing the Java Virtual Machine (JVM). When a query’s estimated memory usage exceeds a configured limit, the circuit breaker “trips,” aborting the operation and returning this exception.
The most common reasons are:
- `[parent]` Circuit Breaker: This is the most common. A query (often with complex aggregations, deeply nested documents, or large `terms` aggregations) tries to load too much data into the JVM heap.
- `[fielddata]` Circuit Breaker: This usually means you’re trying to sort or aggregate on a `text` field for which `fielddata` is enabled. `fielddata` loads all unique terms for a field into memory, which can be extremely expensive for high-cardinality text fields.
- `[request]` Circuit Breaker: An individual request (such as a bulk request or a very large search request) is too big.
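To see which breaker is tripping and how close each one is to its limit, you can query the node stats API; compare `estimated_size_in_bytes` against `limit_size_in_bytes` and check the `tripped` counter for each breaker:

```json
GET _nodes/stats/breaker
```

The response lists every breaker (`parent`, `fielddata`, `request`, and others) per node, so you can confirm which limit is actually being hit before changing any queries or settings.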
Best Practice:
- Optimize Your Queries/Aggregations (see the sketch after this list):
  - Reduce `size`: For `terms` aggregations, reduce the `size` parameter if you don’t need all results.
  - Filter Before Aggregating: Apply filters to your data before performing aggregations to reduce the dataset.
  - Avoid High-Cardinality `terms` Aggregations on Text Fields: If you need to aggregate on a text-like field, ensure it has a `keyword` sub-field and aggregate on that.
  - Pagination: Use `search_after` or the scroll API for deep pagination instead of `from`/`size` on very large result sets.
- Do Not Enable `fielddata` on `text` Fields: This is almost always the wrong solution. Use `.keyword` sub-fields for aggregations and sorting.
- Increase JVM Heap (with caution): You can increase the JVM heap size by setting `-Xms` and `-Xmx` in `jvm.options` (or via the `OPENSEARCH_JAVA_OPTS` environment variable), but be aware that larger heaps can lead to longer garbage collection pauses. A general rule of thumb is to allocate no more than 50% of your total RAM to the heap, up to roughly 30–32 GB. A `jvm.options` sketch follows this list.
- Scale Out: Add more data nodes to your cluster. This distributes the memory load across multiple JVMs.
What else can I do? Are your CircuitBreakingException errors persistent? Share your problematic queries and mappings with the OpenSearch community to get specific optimization advice. For expert help with cluster sizing and query optimization, contact us in the #general channel of the OpenSearch Slack.