
OpenSearch Project Blog

Your source for OpenSearch updates, community news, and technical insights.



Performance optimizations for the OpenSearch security layer

August 14, 2025
When discussing performance optimizations in OpenSearch, we usually focus on the processing of actual data: we look at algorithms and data structures that reduce the amount of time a query...
Community, Technical

Taking your first steps towards search relevance

August 13, 2025
Daniel Wrigley is a Search Consultant at OpenSource Connections. He has worked in search since graduating in computational linguistics at Ludwig-Maximilians-University Munich in 2012, where he developed...

Diving deep into distributed microservices with OpenSearch and OpenTelemetry

August 12, 2025
Shenoy Pratik Gurudatt is a software engineer at AWS working on observability for the OpenSearch Project. He maintains multiple plugins, including OpenSearch Observability, Reporting, and Query Workbench...
Community

Unpacking OpenSearch 3.0: AI, observability, and governance on the OpenObservability Talks podcast

August 8, 2025
In a recent episode of the OpenObservability Talks podcast, I sat down with Carl Meadows, Chair of the Governing Board of the OpenSearch Software Foundation, and Pallavi Priyadarshini, Technical Steering...
Community, Technical

Using OpenSearch for Retrieval-Augmented Generation (RAG)

August 7, 2025
Why Retrieval-Augmented Generation (RAG) matters: Large language models (LLMs) have transformed AI with their ability to generate human-like text, but they have notable limitations. Because an LLM’s knowledge comes from...

Transforming bucket aggregations: Our journey to 100x performance improvement

August 5, 2025
Learn how OpenSearch improves the performance of date histogram aggregations by using the BKD index tree, with support for sub-aggregations and multi-range traversal.
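
As a rough illustration of the kind of request this optimization targets, here is a minimal date histogram aggregation issued through the opensearch-py client; the connection settings, index name, and field name are placeholders, not details from the post.

# Minimal sketch of a date histogram aggregation that benefits from the
# BKD-based optimization; index and field names below are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

response = client.search(
    index="web-logs",  # placeholder index
    body={
        "size": 0,  # aggregation-only request, no hits needed
        "aggs": {
            "requests_per_day": {
                "date_histogram": {
                    "field": "@timestamp",  # placeholder date field
                    "calendar_interval": "day",
                }
            }
        },
    },
)
print(response["aggregations"]["requests_per_day"]["buckets"])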

The new semantic field: Simplifying semantic search in OpenSearch

July 31, 2025
Semantic search improves result relevance by using a machine learning (ML) model to generate dense or sparse vector embeddings from unstructured text. Traditionally, enabling semantic search has required several manual...
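
As a hedged sketch of what the simplification looks like in practice, the semantic field is declared directly in the index mapping and tied to an ML model; the index name and model ID below are placeholders, and the exact mapping options may vary by version.

# Sketch: creating an index with a semantic field via opensearch-py.
# The index name and model_id are placeholders; check the post/docs for exact options.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

client.indices.create(
    index="products",  # placeholder index
    body={
        "mappings": {
            "properties": {
                "description": {
                    "type": "semantic",            # new semantic field type
                    "model_id": "<your-model-id>"  # ML model that generates the embeddings
                }
            }
        }
    },
)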

The state of OpenSearch user groups: Building a global community

July 30, 2025
This blog provides an overview of the OpenSearch User Group program and some observations I shared as part of a keynote presentation at this year’s OpenSearchCon Europe. The OpenSearch Project has grown...

Building a multimodal search engine in OpenSearch

July 29, 2025
In today’s data-rich world, information isn’t just text anymore. It’s images, videos, audio, and more. To truly understand and retrieve relevant information, search engines need to go beyond keywords and...
Community

Case Study: Bringing Fulfillment into Focus: How KRUU Uses OpenSearch for Real-Time Monitoring

July 25, 2025
At KRUU, Europe’s leading photo booth rental provider, smooth logistics are critical to customer satisfaction. The company’s signature product, a rentable, shippable photo booth for weddings and special events, is...

Using OpenSearch as a Vector Database

July 25, 2025
Traditional lexical search, based on term frequency ranking models like BM25, is widely used and effective for many search applications. However, lexical search techniques require significant investment...

OpenSearch approximation framework

July 24, 2025
The Approximation Framework is a performance optimization in OpenSearch that introduces early termination during BKD tree traversal. It enhances the efficiency of numeric range, sort, and match_all queries by collecting only the required number of documents. This framework delivers exact results with significantly reduced query latency.
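
For context, the kind of request the framework speeds up looks like the sorted numeric range query below, where only the top few documents are needed; the index and field names are placeholders, not details from the post.

# Sketch of a sorted numeric range query; with the approximation framework,
# BKD traversal can stop early once the requested number of documents is collected.
# Index and field names are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

response = client.search(
    index="web-logs",  # placeholder index
    body={
        "size": 10,  # only the top 10 hits are needed
        "query": {"range": {"response_time_ms": {"gte": 100, "lte": 500}}},
        "sort": [{"response_time_ms": "asc"}],
    },
)
print([hit["_id"] for hit in response["hits"]["hits"]])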

Advanced usage of the semantic field in OpenSearch

July 23, 2025
Explore advanced semantic field features in OpenSearch, including text chunking, support for externally hosted and custom models, cross-cluster usage, and updating model references.

Introducing semantic highlighting in OpenSearch

July 17, 2025
Explore OpenSearch 3.0's semantic highlighting, an AI-powered feature that intelligently identifies relevant passages based on meaning rather than exact keyword matches. Learn how to enhance your search applications with this powerful capability.

Happy fourth birthday, OpenSearch!

July 17, 2025
A look back at the last 48 months of OpenSearch and a look ahead at what the future holds.

Making ingestion smarter: System ingest pipelines in OpenSearch

July 16, 2025
Discover OpenSearch 3.1's system ingest pipeline feature, enabling plugin developers to automate document processing during ingestion. Learn how it simplifies configuration, improves user experience, and enhances data transformation capabilities.

Introduction to ML inference processors in OpenSearch: Review summarization and semantic search

July 15, 2025
Learn how to use ML inference processors to implement semantic search and summarization in OpenSearch.

Reducing hybrid query latency in OpenSearch 3.1 with efficient score collection

July 9, 2025
OpenSearch 3.1 delivers up to 3.8x faster hybrid query throughput by optimizing score collection, batching, and segment-level execution—resulting in lower latency and more predictable performance under load.
Technical

Neural sparse models are now available in Hugging Face Sentence Transformers

July 7, 2025
Discover how to implement neural sparse models in Sentence Transformers with OpenSearch. Learn to encode text into sparse vectors with just a few lines of code for efficient search.
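
For a flavor of what those few lines look like, here is a hedged sketch using the Sentence Transformers SparseEncoder class; the model ID is an example OpenSearch neural sparse model, and the exact API may differ across library versions.

# Hedged sketch: encoding text into sparse vectors with Sentence Transformers.
# The model id below is an example; substitute the neural sparse model you use.
from sentence_transformers import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill")

sentences = ["OpenSearch is a community-driven, open-source search and analytics suite."]
embeddings = model.encode(sentences)  # sparse vectors over the model's vocabulary
print(embeddings.shape)               # (1, vocab_size), with only a few nonzero weights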

Announcing OpenSearch Data Prepper 2.12: Additional source and sinks for your data ingestion needs

July 1, 2025
OpenSearch Data Prepper 2.12 introduces new features to enhance your data ingestion and processing pipeline. Discover the unified OTLP source for streamlined telemetry data ingestion, the Amazon SQS sink for efficient message queuing, and integration with AWS X-Ray for observability.