
Error log: BulkIndexError – The Batch Failure

November 21, 2025

Error log: This error doesn’t appear in your OpenSearch logs. Instead, it’s an exception your client application receives when one or more documents in a bulk request fail.
In your application logs (e.g., Logstash or your custom code):

org.opensearch.action.bulk.BulkIndexError: 
  [1] document(s) failed to index. 
  [0] document(s) had conflicts. 
  [index-name/document-id] [index] failed 
  - failure: "mapper_parsing_exception: failed to parse field [some_field]..."

Or in the JSON response to your _bulk API call:

{
  "took" : 30,
  "errors" : true, // <-- This tells you something failed
  "items" : [
    {
      "index" : { // This item succeeded
        "_index" : "my-index", "_id" : "1", "status" : 201, ... 
      }
    },
    {
      "index" : { // <-- This item failed
        "_index" : "my-index", "_id" : "2", "status" : 400,
        "error" : {
          "type" : "mapper_parsing_exception", // <-- The REAL error
          "reason" : "failed to parse field [age] of type [integer]..."
        }
      }
    }
  ]
}

Why… is this happening? A BulkIndexError is a high-level summary. It simply means “I tried to process your batch of 1,000 documents, and some of them failed.” The exception itself is a symptom, not the root cause.

The errors: true flag in the JSON response is your cue to look inside the “items” array. You must iterate through each item in the response to find the specific documents that failed.
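For example, a response handler might look something like this minimal Python sketch, which works directly on the parsed response dictionary; the failed_items name and its variables are illustrative, not part of any OpenSearch client library:

def failed_items(bulk_response):
    """Yield (doc_id, status, error) for every failed item in a _bulk response dict."""
    if not bulk_response.get("errors"):
        return  # nothing failed, so there is no need to scan the items array
    for item in bulk_response.get("items", []):
        # Each item is keyed by the action that was attempted: index, create, update, or delete.
        action, result = next(iter(item.items()))
        if result.get("status", 0) >= 300:
            yield result.get("_id"), result["status"], result.get("error", {})

# Example usage: log the real reason for each failure, not just "the bulk request failed".
# for doc_id, status, error in failed_items(response):
#     log.error("doc %s failed (%s): %s - %s", doc_id, status, error.get("type"), error.get("reason"))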

The error for each failed item will be one of the other errors in this series, such as:

  • mapper_parsing_exception: (Most common) The data you sent for a document didn’t match the index mapping.
  • version_conflict_engine_exception: You tried to update a document that was changed by another process.
  • cluster_block_exception: The index (or the whole cluster) has been made read-only, most commonly because the disk flood-stage watermark was reached.

Best practice:

  1. Parse the response: Your application code must check the errors field in the bulk response. If it’s true, you must loop through the items array and check each item’s status or error field.
  2. Log individual errors: Log the specific reason for each failed item. Logging only “BulkIndexError” is useless for debugging.
  3. Implement a retry strategy: For retryable errors (such as a temporary 429 rejection), resend the failed documents in a new bulk request with exponential backoff. For permanent errors (such as mapper_parsing_exception), route the failed documents to a “dead-letter queue” or log them for manual review (see the sketch after this list).
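Putting those pieces together, a retry loop might look roughly like the following Python sketch. It assumes actions is a list with one entry per document, in the same order as the response’s items array (the _bulk API returns items in request order), and send_bulk and dead_letter are hypothetical stand-ins for your own client call and dead-letter writer:

import time

RETRYABLE_STATUSES = {429, 503}  # queue rejections / temporarily unavailable

def index_with_retries(actions, send_bulk, dead_letter, max_retries=3):
    """Send a batch, retrying retryable failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        response = send_bulk(actions)  # performs the _bulk call and returns the parsed JSON
        if not response.get("errors"):
            return
        retryable, permanent = [], []
        results = (next(iter(item.values())) for item in response["items"])
        for action, result in zip(actions, results):
            if result.get("status", 0) < 300:
                continue  # this document succeeded
            bucket = retryable if result["status"] in RETRYABLE_STATUSES else permanent
            bucket.append((action, result))
        for action, result in permanent:
            dead_letter(action, result.get("error"))  # permanent failures: do not retry
        if not retryable:
            return
        if attempt == max_retries:
            for action, result in retryable:
                dead_letter(action, result.get("error"))  # out of retries
            return
        actions = [action for action, _ in retryable]
        time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...

The exact backoff and status codes are up to you; the point is that every failed document is routed somewhere deliberate (retried or dead-lettered) instead of being silently dropped.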

What else can I do? Building a robust bulk indexing process with error handling can be tricky. If you’re seeing persistent BulkIndexError messages and aren’t sure how to parse the response, ask the OpenSearch community! You can also contact us for help at help@opensearchsoftwarefoundation.org.
