
Error log: “too_many_buckets_exception” – The aggregation explosion

November 21, 2025

Error Log: You run a search with aggregations (e.g., a terms or histogram aggregation), and the query fails with the following error:

JSON
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Trying to create too many buckets. Must be less than or equal to: [10000]... This limit can be set by the cluster setting [search.max_buckets]."
      }
    ],
    "type": "search_phase_execution_exception",
    // ...
  },
  "status": 503
}

Why… is this happening? This is a critical safeguard to protect your cluster’s memory and stability. An aggregation, especially a terms aggregation on a high-cardinality field (like user_id or ip_address), can “explode” and try to generate millions or even billions of unique buckets.

Each bucket consumes memory (JVM heap) on the coordinating node. Without a limit, a single bad aggregation query could instantly cause an OutOfMemoryError (Blog #26) and crash your node.

To prevent this, OpenSearch has a global setting, search.max_buckets, which defaults to 10,000. This error means your aggregation query (or combination of nested aggregations) tried to generate more than 10,000 buckets, so OpenSearch aborted the query.
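
For illustration, here is the kind of request that trips this limit. This is a minimal hypothetical sketch; the events index and the user_id and timestamp fields are stand-ins for any high-cardinality setup:

JSON
GET /events/_search
{
  "size": 0,
  "aggs": {
    "per_user": {
      "terms": {
        // High-cardinality keyword field with a huge requested size
        "field": "user_id",
        "size": 50000
      },
      "aggs": {
        "per_day": {
          "date_histogram": {
            // Nested buckets multiply: users × days
            "field": "timestamp",
            "calendar_interval": "day"
          }
        }
      }
    }
  }
}

Even a modest terms size can blow past the limit once it is multiplied by a nested bucketing aggregation like this one.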

Best Practice:

  1. Don’t increase the limit (if possible): The fix is not to simply raise the max_buckets limit. That is like taping over a warning light. The real fix is to rethink your query.
  2. Filter first: Add a query filter to your search to drastically reduce the number of documents before you aggregate them (e.g., filter on a time range or a specific customer). See the sketch after this list.
  3. Use .keyword: Ensure you are aggregating on a keyword field, not an analyzed text field.
  4. Use cardinality: If you just want to know how many unique users there are, use a cardinality aggregation. It is an approximation, but it uses far less memory (also shown in the sketch below).
  5. Increase the limit (last resort): If you know your query is safe and you genuinely need 15,000 buckets, you can (cautiously) increase the limit. This is dangerous:

    JSON
    PUT /_cluster/settings
    {
      "transient": {
        "search.max_buckets": 15000
      }
    }

    (A transient setting resets on a full cluster restart; use "persistent" if the change should survive one.)
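
As a concrete illustration of points 2–4, here is a minimal sketch. The orders index and the order_date and customer_id.keyword fields are hypothetical stand-ins; the pattern is what matters: narrow the document set with a filter first, then ask for a single cardinality metric instead of one bucket per customer:

JSON
GET /orders/_search
{
  "size": 0,
  "query": {
    "range": {
      // Filter first: only aggregate the last 24 hours of documents
      "order_date": { "gte": "now-1d/d" }
    }
  },
  "aggs": {
    "unique_customers": {
      "cardinality": {
        // One approximate number instead of a bucket per customer,
        // computed on a keyword field rather than analyzed text
        "field": "customer_id.keyword"
      }
    }
  }
}

The cardinality aggregation returns a single approximate count (backed by HyperLogLog++), so it never materializes a bucket per unique value.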

What else can I do? Trying to find the “top N” items from a high-cardinality field? This is a common challenge. Ask the OpenSearch community for advice on query design and performance. For direct support on cluster stability, reach out in the #general channel of the OpenSearch Slack.
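
If you genuinely need to walk every bucket of a high-cardinality field, one pattern the community often recommends is the composite aggregation, which pages through buckets in fixed-size chunks instead of materializing them all in one response. A minimal sketch (the events index and user_id.keyword field are again hypothetical):

JSON
GET /events/_search
{
  "size": 0,
  "aggs": {
    "users": {
      "composite": {
        // At most 1,000 buckets per request, well under search.max_buckets
        "size": 1000,
        "sources": [
          { "user": { "terms": { "field": "user_id.keyword" } } }
        ]
      }
    }
  }
}

Each response includes an after_key; pass it as the after parameter in the next request to fetch the following page, so no single request ever exceeds the bucket limit.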
