
OpenSearch Benchmark

OpenSearch Benchmark is a macrobenchmark utility provided by the OpenSearch Project. You can use OpenSearch Benchmark to gather performance metrics from an OpenSearch cluster for a variety of purposes, including:

  • Tracking the overall performance of an OpenSearch cluster.
  • Informing decisions about when to upgrade your cluster to a new version.
  • Determining how changes to your workflow—such as modifying mappings or queries—might impact your cluster.

OpenSearch Benchmark can be installed directly on a compatible host running Linux and macOS. You can also run OpenSearch Benchmark in a Docker container. See Installing OpenSearch Benchmark for more information.
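As a sketch of the installation options mentioned above, OpenSearch Benchmark is distributed on PyPI and as a Docker image; exact version requirements and image tags may vary between releases, so check the installation guide for your version:

```shell
# Install from PyPI (requires a compatible Python 3 and pip on Linux or macOS)
pip install opensearch-benchmark

# Verify the installation
opensearch-benchmark --version

# Alternatively, pull the Docker image and run the utility in a container
docker pull opensearchproject/opensearch-benchmark:latest
docker run opensearchproject/opensearch-benchmark:latest --version
```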

Concepts

Before using OpenSearch Benchmark, familiarize yourself with the following concepts:

  • Workload: A description of one or more benchmarking scenarios that use a specific document corpus to run a benchmark against your cluster. The document corpus contains the indexes, data files, and operations invoked when the workload runs. You can list the available workloads by using opensearch-benchmark list workloads or view the included workloads in the OpenSearch Benchmark Workloads repository. For information about building a custom workload, see Creating custom workloads.

  • Pipeline: A series of steps, performed before and after a workload runs, that determines how the benchmark is executed and how its results are produced. OpenSearch Benchmark supports three pipelines:
    • from-sources: Builds and provisions OpenSearch, runs a benchmark, and then publishes the results.
    • from-distribution: Downloads an OpenSearch distribution, provisions it, runs a benchmark, and then publishes the results.
    • benchmark-only: The default pipeline. Assumes an already running OpenSearch instance, runs a benchmark on that instance, and then publishes the results.
  • Test: A single invocation of the OpenSearch Benchmark binary.
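Tying these concepts together, a single test against an already running cluster might look like the following. This is a minimal sketch: the workload name, endpoint, and credentials are placeholders, and available flags may differ across OpenSearch Benchmark versions.

```shell
# Run the geonames workload against an existing cluster at localhost:9200,
# using the default benchmark-only pipeline (no provisioning is performed).
opensearch-benchmark execute-test \
  --pipeline=benchmark-only \
  --workload=geonames \
  --target-hosts=localhost:9200
```

Because benchmark-only is the default pipeline, the `--pipeline` flag can be omitted here; it is shown explicitly for clarity.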
