OpenSearch Core

OpenSearch Core is a high-performance search engine built on the robust indexing and search capabilities of Apache Lucene.

It allows you to ingest large volumes of content in diverse formats using OpenSearch Data Prepper, index complex multidimensional data, and perform efficient searches that return highly accurate results.

You can then explore and visualize your data seamlessly with OpenSearch Dashboards.

A powerful and versatile search and analytics engine

Robust search and indexing based on Apache Lucene

Powerful, open, designed to scale

OpenSearch is a fast, flexible, open-source suite for search and analytics. From log data to real-time monitoring, it delivers enterprise-grade features like security, alerting, and machine learning—without vendor lock-in. Its distributed design scales easily and keeps you in control.

Built on Apache Lucene

Apache Lucene™ is a high-performance, full-featured search engine library written entirely in Java. It is a technology suitable for nearly any application that requires structured search, full-text search, faceting, nearest-neighbor search across high-dimensionality vectors, spell correction, or query suggestions.

OpenSearch Core components

Fault tolerant, scalable components

The OpenSearch Core architecture is made up of clusters, nodes, indexes, shards, and documents. At the top level is the OpenSearch cluster, a distributed network of nodes, each responsible for different cluster operations based on its type. Data nodes are responsible for storing indexes—logical groupings of documents—and handling tasks like data ingestion, search, and aggregation.

Each index is divided into shards, which include both primary and replica data. Shards are distributed across multiple machines, enabling horizontal scaling for improved performance and efficient use of storage.
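The routing step described above can be sketched in a few lines. OpenSearch assigns each document to a primary shard by hashing its routing value (the document ID by default) modulo the number of primary shards; the sketch below uses `zlib.crc32` purely as a stand-in for the actual Murmur3 hash, and the function name is hypothetical.

```python
# Sketch of document-to-shard routing, assuming a hash-modulo scheme.
# OpenSearch actually uses a Murmur3 hash of the routing value (the
# document ID by default); zlib.crc32 stands in for illustration only.
import zlib

def route_to_shard(doc_id: str, num_primary_shards: int) -> int:
    """Return the primary shard that would hold this document."""
    return zlib.crc32(doc_id.encode("utf-8")) % num_primary_shards

# With 5 primary shards, documents spread deterministically across shards 0-4.
shards = [route_to_shard(f"doc-{i}", 5) for i in range(10)]
print(shards)
```

Because the shard count is part of the routing formula, the number of primary shards is fixed at index creation time, which is why resizing an index requires reindexing or a split operation.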

OpenSearch’s sharding strategy

A diagram of OpenSearch clusters

OpenSearch clusters are highly scalable and exceptionally resilient.

Scalable by design

Handle growing data volumes effortlessly with horizontal scaling and distributed architecture.

Open and flexible

Fully open-source with support for custom plugins, fine-tuned queries, and community-driven innovation.

Real-time insights

Run lightning-fast indexing, searches, and analytics on streaming or historical data for instant visibility.

Secure and extensible

Built-in security features, including role-based access control, audit logging, and compliance support.

Key features

Lexical search

Perform traditional keyword-based queries using the Okapi BM25 algorithm.
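To make the BM25 ranking concrete, here is a pure-Python sketch of the Okapi BM25 scoring formula with a hypothetical toy corpus. Lucene's production implementation differs in details (e.g., IDF smoothing and length-norm encoding), so treat this as illustrative only.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against a query with Okapi BM25.

    query_terms: list of query terms; doc: list of tokens;
    corpus: list of token lists (must include doc).
    """
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        tf = doc.count(term)                      # term frequency in doc
        # Saturation via k1, length normalization via b.
        norm = tf + k1 * (1 - b + b * len(doc) / avg_len)
        score += idf * tf * (k1 + 1) / norm
    return score

# Hypothetical mini-corpus of tokenized documents.
corpus = [
    "the quick brown fox".split(),
    "search engines rank documents".split(),
    "quick search with bm25".split(),
]
scores = [bm25_score(["quick", "search"], d, corpus) for d in corpus]
print(scores.index(max(scores)))  # -> 2: the doc matching both terms wins
```

The k1 parameter caps how much repeated terms help, and b controls how strongly longer documents are penalized; both are tunable per field in OpenSearch similarity settings.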

Semantic search

Incorporate text embedding models and advanced vector search capabilities for interpreting the meaning and context of queries.

Hybrid search

Combine keyword and semantic search to improve search relevance and accuracy.
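In OpenSearch, hybrid search combines sub-query scores through a normalization step, since BM25 and vector-similarity scores live on different scales; min-max normalization followed by a weighted arithmetic mean is one supported combination. The sketch below uses hypothetical per-document scores and function names.

```python
def min_max(scores):
    """Rescale a list of scores to [0, 1]; a constant list maps to 1.0."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_combine(bm25_scores, knn_scores, lexical_weight=0.3):
    """Weighted arithmetic mean of normalized lexical and vector scores."""
    b = min_max(bm25_scores)
    k = min_max(knn_scores)
    w = lexical_weight
    return [w * bs + (1 - w) * ks for bs, ks in zip(b, k)]

# Hypothetical raw scores for three documents from each sub-query.
combined = hybrid_combine([12.1, 4.3, 0.5], [0.92, 0.88, 0.15])
print(max(range(3), key=lambda i: combined[i]))  # -> 0
```

The weight lets you bias results toward exact keyword matches or toward semantic similarity depending on the application.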

Vector engine

Execute k-NN searches and manage vector data alongside other data types with OpenSearch Core’s integrated vector database.
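A k-NN search returns the k stored vectors closest to a query vector. OpenSearch's vector engine uses approximate algorithms such as HNSW at scale; the sketch below shows the exact (brute-force) version by Euclidean distance, with hypothetical document embeddings, to illustrate what those engines approximate.

```python
import math

def knn_search(query, vectors, k=2):
    """Exact k-NN: return the ids of the k vectors nearest to `query`
    by Euclidean distance. (Approximate engines like HNSW trade this
    exactness for sub-linear search time at scale.)"""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, v)))
    ranked = sorted(vectors, key=lambda item: dist(item[1]))
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical 3-dimensional document embeddings.
index = [
    ("doc-a", [0.9, 0.1, 0.0]),
    ("doc-b", [0.0, 1.0, 0.2]),
    ("doc-c", [0.8, 0.2, 0.1]),
]
print(knn_search([1.0, 0.0, 0.0], index, k=2))  # -> ['doc-a', 'doc-c']
```

In OpenSearch itself, the same operation is expressed as a `knn` query against a vector field, with distance metric and engine chosen in the index mapping.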

Intuitive dashboards

Render your search data and use natural language instructions to create visualizations with OpenSearch Dashboards.

Dynamic scaling

Support enterprise workloads with both horizontal and vertical scaling, plus native vector capabilities that handle billions of vectors.

Extensive plugins

Add new features and capabilities to OpenSearch Core using its expansive library of pre-built plugins, with more added regularly.

Benefits

Established and trusted

OpenSearch Core is built on Apache Lucene’s scalable, high-performance search library, continuously updated to meet evolving performance requirements.

Flexible and adaptable

OpenSearch Core is highly customizable, making it adaptable to a diverse range of search applications and AI use cases.

Extensible and open

With dozens of plugins available and more arriving with each release, you can easily add new features or build your own.

Getting started

Check out the installation quickstart to get started using OpenSearch right away.

Access numerous bundled plugins or install additional plugins to customize your OpenSearch platform.

Review the OpenSearch performance benchmarks to view the results of ongoing performance testing.

Most recent OpenSearch blog posts

March 12, 2026 in Case Studies

Achieving 99.99% Availability: Atlassian’s Success with OpenSearch for Mission-Critical Workloads

Atlassian, the company behind Jira, Confluence, and Trello, migrated critical workloads to OpenSearch. As demand for artificial intelligence (AI) and contextual search increased, OpenSearch became a foundation for new innovations…
March 11, 2026 in Blog

Beyond filter rewrite: How doc value skip indexes accelerate aggregations in OpenSearch

OpenSearch doc value skip indexes deliver up to 28x faster aggregations on uncorrelated filter and aggregation fields, overcoming key filter rewrite limitations.
March 5, 2026 in Blog

OpenSearch Agent Health: Open-source observability and evaluation for AI agents

Discover why AI agents fail in silence and how OpenSearch Agent Health solves it with open-source trace observability, automated benchmarking, and LLM judge evaluation.
March 3, 2026 in Blog

Accelerating FP16 vector search performance using bulk SIMD in OpenSearch 3.5

Discover how OpenSearch FP16 vector search performance improved up to 310% throughput using bulk SIMD in OpenSearch 3.5, with progressive optimizations across 3.1, 3.4, and 3.5 reducing p99 latency to…
February 25, 2026 in Blog

Data Prepper 2.14: OpenTelemetry-powered APM service maps and production-ready Prometheus support

Data Prepper 2.14 adds OpenTelemetry-powered APM service maps, production-ready Prometheus support, Lambda streaming, and cross-region S3 for modern observability.
February 24, 2026 in Blog

The 2026 OpenSearch Roadmap: Four pillars for AI-native innovation

As OpenSearch continues to evolve, clarity and collaboration matter more than ever. We are transforming how we share the OpenSearch technical roadmap to give you greater visibility into what’s coming…