
OpenSearch Core

OpenSearch Core is a high-performance search engine built on the robust indexing and search capabilities of Apache Lucene.

It allows you to ingest large volumes of content in diverse formats using OpenSearch Data Prepper, index complex multidimensional data, and perform efficient searches that return highly accurate results.

You can then explore and visualize your data seamlessly with OpenSearch Dashboards.

A powerful and versatile search and analytics engine

Robust search and indexing based on Apache Lucene

Powerful, open, designed to scale

OpenSearch is a fast, flexible, open-source suite for search and analytics. From log analytics to real-time monitoring, it delivers enterprise-grade features like security, alerting, and machine learning—without vendor lock-in. Its distributed design scales easily and keeps you in control.

Built on Apache Lucene

Apache Lucene™ is a high-performance, full-featured search engine library written entirely in Java. It is suitable for nearly any application that requires structured search, full-text search, faceting, nearest-neighbor search across high-dimensional vectors, spell correction, or query suggestions.

OpenSearch Core components

Fault-tolerant, scalable components

The OpenSearch Core architecture is made up of clusters, nodes, indexes, shards, and documents. At the top level is the OpenSearch cluster, a distributed network of nodes, each responsible for different cluster operations based on its type. Data nodes are responsible for storing indexes—logical groupings of documents—and handling tasks like data ingestion, search, and aggregation.

Each index is divided into shards, which include both primary and replica data. Shards are distributed across multiple machines, enabling horizontal scaling for improved performance and efficient use of storage.
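As a rough sketch of how this maps to the API, the example below creates an index with three primary shards and one replica per primary through the REST API; the endpoint, credentials, and index name are placeholders for a local test cluster, not part of any production setup.

```python
import requests

# Assumptions: a local test cluster at https://localhost:9200 secured with
# demo credentials; "logs" is a hypothetical index name.
BASE = "https://localhost:9200"
AUTH = ("admin", "admin")

# Create an index with 3 primary shards, each backed by 1 replica.
# OpenSearch distributes the resulting shard copies across the data nodes.
resp = requests.put(
    f"{BASE}/logs",
    json={"settings": {"number_of_shards": 3, "number_of_replicas": 1}},
    auth=AUTH,
    verify=False,  # demo self-signed certificate; not for production
)
print(resp.json())
```

With one replica per primary, the cluster can lose a data node and continue serving the full index from the surviving copies.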

[Diagram: OpenSearch’s sharding strategy, showing OpenSearch clusters]

OpenSearch clusters are highly scalable and exceptionally resilient.

Scalable by design

Handle growing data volumes effortlessly with horizontal scaling and a distributed architecture.

Open and flexible

Fully open-source with support for custom plugins, fine-tuned queries, and community-driven innovation.

Real-time insights

Run lightning-fast indexing, searches, and analytics on streaming or historical data for instant visibility.

Secure and extensible

Protect your data with built-in security, including role-based access control, audit logging, and compliance support.

Key features

Lexical search

Perform traditional keyword-based queries using the Okapi BM25 algorithm.
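As a minimal sketch (the index and field names here are hypothetical), a lexical query is a `match` clause sent to the `_search` endpoint; hits are ranked by their BM25 relevance score by default.

```python
import requests

# Assumptions: a local test cluster with demo credentials and a hypothetical
# "products" index whose "description" field is full-text analyzed.
BASE = "https://localhost:9200"
AUTH = ("admin", "admin")

query = {
    "query": {
        "match": {
            "description": {"query": "wireless noise-cancelling headphones"}
        }
    }
}

# Each hit carries a BM25 _score; higher means more relevant.
resp = requests.get(f"{BASE}/products/_search", json=query, auth=AUTH, verify=False)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```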

Semantic search

Incorporate text embedding models and advanced vector search capabilities to interpret the meaning and context of queries.
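As a hedged sketch only: a semantic (neural) query assumes neural search is enabled on the cluster, a text embedding model has already been registered and deployed, and the target index has a vector field populated at ingest time; the model ID, index, and field names below are placeholders.

```python
import requests

# Assumptions: neural search is available on the cluster, a text embedding
# model is deployed, and the hypothetical "docs" index has a
# "passage_embedding" vector field kept in sync by an ingest pipeline.
# MODEL_ID is a placeholder, not a real ID.
BASE = "https://localhost:9200"
AUTH = ("admin", "admin")
MODEL_ID = "<your-deployed-model-id>"

query = {
    "query": {
        "neural": {
            "passage_embedding": {
                "query_text": "how do I reset my password?",
                "model_id": MODEL_ID,
                "k": 5,  # number of nearest neighbors to retrieve
            }
        }
    }
}

resp = requests.get(f"{BASE}/docs/_search", json=query, auth=AUTH, verify=False)
print(resp.json()["hits"]["hits"])
```

The model embeds the query text at search time, and results are ranked by vector similarity rather than by exact keyword overlap.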

Hybrid search

Combine keyword and semantic search to improve search relevance and accuracy.
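One way to sketch this, under the same assumptions as the semantic example above (deployed embedding model, placeholder index and field names), is a hybrid query paired with a search pipeline that normalizes and blends the lexical and vector scores.

```python
import requests

# Assumptions: same local test cluster and hypothetical "docs" index as above;
# MODEL_ID is a placeholder for a deployed text embedding model.
BASE = "https://localhost:9200"
AUTH = ("admin", "admin")
MODEL_ID = "<your-deployed-model-id>"

# One-time setup: a search pipeline that min-max-normalizes each sub-query's
# scores and combines them with a weighted arithmetic mean
# (30% lexical, 70% semantic in this sketch).
pipeline = {
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [0.3, 0.7]},
                },
            }
        }
    ]
}
requests.put(
    f"{BASE}/_search/pipeline/hybrid-pipeline",
    json=pipeline, auth=AUTH, verify=False,
)

# The hybrid query runs both clauses; the pipeline merges the two result sets.
query = {
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"text": {"query": "reset password"}}},
                {
                    "neural": {
                        "passage_embedding": {
                            "query_text": "reset password",
                            "model_id": MODEL_ID,
                            "k": 5,
                        }
                    }
                },
            ]
        }
    }
}
resp = requests.get(
    f"{BASE}/docs/_search",
    params={"search_pipeline": "hybrid-pipeline"},
    json=query, auth=AUTH, verify=False,
)
print(resp.json()["hits"]["hits"])
```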

Vector engine

Execute k-NN searches and manage vector data alongside other data types with OpenSearch Core’s integrated vector database.
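A compact sketch of the vector workflow (the index, field, and tiny 3-dimensional vectors below are illustrative stand-ins for real embeddings): define a `knn_vector` field, index documents that carry vectors alongside ordinary fields, then run an approximate k-NN query.

```python
import requests

# Assumptions: a local test cluster with demo credentials; the "movies" index,
# "embedding" field, and 3-dimensional vectors are illustrative only.
BASE = "https://localhost:9200"
AUTH = ("admin", "admin")

# Create a vector-enabled index with a knn_vector field next to a text field.
index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "embedding": {"type": "knn_vector", "dimension": 3},
        }
    },
}
requests.put(f"{BASE}/movies", json=index_body, auth=AUTH, verify=False)

# Index a document that stores a vector alongside regular fields.
doc = {"title": "Example film", "embedding": [0.1, 0.4, 0.9]}
requests.post(f"{BASE}/movies/_doc?refresh=true", json=doc, auth=AUTH, verify=False)

# Approximate k-NN query: return the 2 documents nearest the query vector.
query = {
    "size": 2,
    "query": {"knn": {"embedding": {"vector": [0.1, 0.4, 0.8], "k": 2}}},
}
resp = requests.get(f"{BASE}/movies/_search", json=query, auth=AUTH, verify=False)
print(resp.json()["hits"]["hits"])
```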

Intuitive dashboards

Explore your search data and use natural language instructions to create visualizations with OpenSearch Dashboards.

Dynamic scaling

Support enterprise workloads with both horizontal and vertical scaling, plus native vector capabilities that handle billions of vectors.

Extensive plugins

Add new features and capabilities to OpenSearch Core using its expansive library of pre-built plugins, with more added on a regular basis.

Benefits

Established and trusted

OpenSearch Core is built on Apache Lucene’s scalable, high-performance search library, continuously updated to meet evolving performance requirements.

Flexible and adaptable

OpenSearch Core is highly customizable, making it adaptable to a diverse range of search applications and AI use cases.

Extensible and open

With dozens of plugins available and more supported with each new release, you can easily add new features or build your own.

Getting started

Check out the installation quickstart to get started using OpenSearch right away.

Access numerous bundled plugins or install additional plugins to customize your OpenSearch platform.

Review the OpenSearch performance benchmarks to view the results of ongoing performance testing.

Most recent OpenSearch blog posts

October 28, 2025 in Blog

Batch Processing Semantic Highlighting in OpenSearch 3.3

OpenSearch 3.3 introduces batch processing for remote semantic highlighting models, reducing ML inference calls from N to 1 per search query. Our benchmarks demonstrate 2-14× performance improvements depending on document…
October 23, 2025 in Blog

Scaling neural sparse search to billions of vectors with approximate search

Neural sparse search combines the semantic understanding of neural models with the efficiency of sparse vector representations. This technique has proven effective for semantic retrieval while maintaining the advantages of…
October 22, 2025 in Blog

Save up to 2x on storage with derived source

Storage is a key factor driving the infrastructure cost of your OpenSearch cluster. As your data grows, storage requirements can increase multifold, depending on whether OpenSearch stores documents in multiple…
October 17, 2025 in Blog

Technical Steering Committee: Reflecting on our first year

As we mark the first anniversary of the OpenSearch Software Foundation's Technical Steering Committee (TSC), it's important to celebrate the milestones we've achieved and acknowledge the dedicated individuals who helped…
October 16, 2025 in News

OpenSearch 3.3’s New AI Agents Now Generally Available for Developers

October 15, 2025 in News

OpenSearch 3.3 delivers an all-in-one observability experience for search
