Error Logs

Error Log: HTTP 429 “Too many requests” (and EsRejectedExecutionException) – Overloaded cluster

November 19, 2025

When your client application sends requests faster than the cluster can absorb them, OpenSearch responds with an HTTP 429 status code:

HTTP/1.1 429 Too Many Requests
Content-Type: application/json

In the OpenSearch logs, you’ll see the corresponding server-side error:

[WARN ][o.o.c.c.EsRejectedExecutionException] [your-node-name] 
  rejected execution of [...] on EsThreadPoolExecutor[name=your-cluster/write, queue capacity = 200, 
  org.opensearch.common.util.concurrent.OpenSearchThreadPoolExecutor@...].
  Processor will be throttled.

Why… is this happening? An HTTP 429 (Too Many Requests) from OpenSearch means your cluster is under too much load and is actively rejecting new requests to protect itself from becoming completely unresponsive.

The underlying OpenSearch log, EsRejectedExecutionException, indicates that a specific thread pool queue on a node is full. OpenSearch uses separate thread pools for different types of operations (e.g., write, search, get); bulk requests are handled by the write pool.

  • [write] thread pool: Indicates too many indexing or bulk requests.
  • [search] thread pool: Indicates too many search queries.
  • Other thread pools (get, refresh, etc.) can also be rejected, though less commonly.

When a queue is full, OpenSearch has no more capacity to process incoming requests of that type, so it rejects them. This is a crucial self-defense mechanism.
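The rejection behavior can be pictured as a bounded task queue: once the queue for a given operation type is full, new work is refused rather than buffered indefinitely. A minimal Python sketch of that idea (an analogy using a fixed-capacity queue, not actual OpenSearch code):

```python
import queue

# Analogy only: a bounded queue standing in for a thread pool's task queue.
# The real write queue in the log above has a capacity of 200; we use a tiny
# capacity here so the rejection is easy to see.
task_queue = queue.Queue(maxsize=3)

def submit(task):
    """Return True if the task is accepted, False if rejected (queue full)."""
    try:
        task_queue.put_nowait(task)
        return True
    except queue.Full:
        # This is the moment OpenSearch would log EsRejectedExecutionException
        # and the client would receive HTTP 429.
        return False

results = [submit(f"request-{i}") for i in range(5)]
print(results)  # [True, True, True, False, False]
```

The first three requests fit in the queue; the last two are rejected immediately instead of being queued, which is exactly the trade-off the cluster makes to stay responsive.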

Best Practice:

  1. Implement Client-Side Backoff and Retries: This is the most important client-side strategy. If you receive a 429, wait a short period and then retry the request. Use exponential backoff with a little random jitter (e.g., wait 1 second, then 2, then 4, and so on) so that many clients don’t all retry at the same moment and overwhelm the cluster further.
  2. Slow Down Your Ingestion: If you’re seeing write rejections, reduce the rate at which your applications are sending indexing requests. Consider breaking large bulk requests into smaller ones.
  3. Optimize Your Queries: If search rejections are frequent, optimize your search queries to reduce their resource consumption. Check for expensive aggregations or scripts.
  4. Increase Cluster Capacity:
    • Add more data nodes to distribute the load.
    • Scale up existing nodes with more CPU, memory, or faster storage.
    • Consider dedicating nodes for specific roles (e.g., separate data and ingest nodes).
  5. Monitor Your Thread Pools: Use GET _cat/thread_pool?v to see the queue sizes and rejected tasks for all thread pools. This helps pinpoint the exact bottleneck.
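Step 1 above can be sketched as a generic retry loop. `send_request` here is a hypothetical stand-in for whatever client call your application makes (an indexing request, a search, etc.); the 429 handling and the exponential delay schedule are the point:

```python
import random
import time

def retry_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Call send_request(); on HTTP 429, back off exponentially and retry.

    Delays follow base_delay * 2**attempt (1s, 2s, 4s, ...) plus a small
    random jitter so concurrent clients don't retry in lockstep.
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response["status"] != 429:
            return response
        if attempt == max_retries:
            break
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("still throttled after %d retries" % max_retries)
```

In a real application you would also cap the maximum delay and treat other retryable statuses (e.g., 503) the same way; only 429 is shown here to keep the sketch focused.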
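Step 2’s advice to break large bulk requests into smaller ones amounts to simple chunking. A sketch (the batch size of 500 is an arbitrary illustration, not a recommendation — tune it against your own rejection rate and document sizes):

```python
def chunked(docs, batch_size=500):
    """Yield successive fixed-size batches from a list of documents."""
    for start in range(0, len(docs), batch_size):
        yield docs[start:start + batch_size]

# Example: 1200 documents become three smaller bulk payloads.
docs = list(range(1200))
batches = list(chunked(docs, batch_size=500))
print([len(b) for b in batches])  # [500, 500, 200]
```

Each batch would then be sent as its own bulk request, ideally through the retry logic from step 1, so a single rejection only forces a small payload to be resent.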
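For step 5, the plain-text output of `GET _cat/thread_pool?v` is a header row followed by one row per node/pool. A small parser like this (the sample response below is illustrative; column names and values will vary by cluster) makes the `rejected` counts easy to scan programmatically:

```python
def parse_cat_output(text):
    """Parse _cat-style whitespace-delimited output into a list of dicts."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    headers = lines[0].split()
    return [dict(zip(headers, row.split())) for row in lines[1:]]

# Illustrative sample of _cat/thread_pool?v output, not from a real cluster.
sample = """\
node_name name   active queue rejected
node-1    search 2      0     0
node-1    write  8      200   1375
"""
pools = parse_cat_output(sample)
hot = [p for p in pools if int(p["rejected"]) > 0]
print(hot)  # only the write pool on node-1 is rejecting work
```

A non-zero and growing `rejected` count on a pool tells you which operation type (write vs. search) is the bottleneck, which in turn tells you which of the remediations above to apply first.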

What else can I do? Dealing with 429 errors requires a good understanding of both your application’s load and your cluster’s capacity. The OpenSearch community can share strategies for scaling and optimization. For a deep dive into cluster sizing and performance tuning, reach out to us in the #general channel of the OpenSearch Slack.
