As OpenSearch continues to evolve, clarity and collaboration matter more than ever. We are transforming how we share the OpenSearch technical roadmap to give you greater visibility into what’s coming next. This new approach makes it easier for you to provide feedback, prioritize features, and collaborate on development, ensuring that contributor efforts align with community needs. The project has introduced a theme-based, community-driven roadmap built on four pillars of innovation:

  1. Search modernization
  2. Observability and analytics
  3. Scalability and resiliency
  4. Community and platform 

These pillars create a clear development framework that continues to position OpenSearch as the preferred open source solution for search, analytics, and observability, now and in the future. 

This post explores each pillar, starting with a high-level overview and then diving into the technical details. Throughout this blog post, you’ll find links to relevant GitHub RFCs and issues where you can explore specific innovations, provide feedback, and contribute to development.

2026 at a glance

OpenSearch is evolving into a cohesive, AI-native platform built to power AI systems for organizations of all sizes. The project is closing the query complexity gap by simplifying advanced techniques, such as hybrid search, vector search, semantic search, cross-encoders, and reranking, into composable query pipelines with a Python-native developer experience. OpenSearch provides sophisticated capabilities out of the box, including query understanding that captures business context, native conditional logic, and interleaved A/B testing.

Core engine modernization supports the high-throughput demands of AI agents with 2x throughput improvements, streaming aggregations, and gRPC APIs. Native agentic support through Model Context Protocol (MCP) integration and AgentHealth evaluation tools helps AI engineers build RAG and agentic solutions at scale.

Search modernization delivers “just works” functionality with built-in query understanding and evaluation through interleaved A/B testing. It also elevates vector and search capabilities to meet the demands of AI-native workloads as OpenSearch evolves to support AI agents as a new class of search users.

Observability and analytics builds a unified platform that integrates logs, metrics, and traces. With Discover as the central hub for analytics, OpenSearch streamlines workflows so users can move from querying to visualization and alerting without context switching. Key efforts include deepening OpenTelemetry Protocol (OTLP) support, enhancing PPL and SQL interfaces, and integrating OpenSearch’s observability toolkit with Prometheus to enable unified observability.

Scalability and resiliency delivers seamless scaling that eliminates complete cluster redesigns at every growth stage. This includes advancing remote-backed storage and automatic storage tiering to balance cost and performance, along with implementing query resiliency features such as search backpressure to prevent runaway queries from impacting cluster stability. The vision is a cloud-native architecture with automated, zero-configuration scaling that delivers operational simplicity and cost-effective growth to petabyte scale.

Community and platform addresses the health and growth of the OpenSearch Project. The project is committed to growing maintainership by defining a clear advancement path for contributors and increasing active participation. This includes maintaining a rapid, automated release process and providing transparent metrics to safeguard feature integrity. The project supports new user communities by providing agentic search systems, making it easier to integrate with Python-native environments and simplifying how contributors propose and categorize ideas for the public roadmap.

Four pillars: Deep dive

The following sections explain how each pillar translates into practical capabilities and innovations that you can adopt, evaluate, and help shape as part of the OpenSearch community.

Pillar 1: Search modernization 

AI-native search platform serves as the unified backbone for enterprise AI systems. Rather than merely returning data for simple queries, OpenSearch supports conversational agents, security automation, and knowledge discovery at scale. The project’s planned innovations deliver an experience that moves away from complex legacy configurations and verbose JSON to appeal directly to AI engineers and product managers. By integrating these advanced capabilities, OpenSearch handles the full lifecycle of AI search, from development to production.

AI-native developer experience empowers data scientists and engineers who are new to information retrieval by providing a Python-native environment that mirrors the ease of use found in modern machine learning tools. SearchBuilder enables Python-native AI engineers to immediately prototype a full OpenSearch index with sophisticated query structures through a simple chat-based UI on their laptop. The local configuration can then be deployed directly to the cloud. Composable query pipelines allow users to stack preprocessing, hybrid search, and reranking components without extensive manual coding. Native support for experimentation, including interleaved A/B testing, ensures that developers can refine models using built-in feedback loops.  

Sophisticated query capabilities provide immediate advanced capabilities to power retrieval-augmented generation (RAG) and agentic solutions. OpenSearch closes the query complexity gap by providing simplified out-of-the-box support for hybrid search, cross-encoders, and advanced ranking functions that meets the needs of most teams. Our roadmap includes enhanced query understanding that goes beyond keyword matching to capture true business context, such as distinguishing between a user who is browsing casually and one ready to make a purchase. The project is experimenting with implementing native conditional logic within query pipelines to handle complex search workflows internally.
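Hybrid search support that already ships in OpenSearch illustrates this direction: a hybrid query combines lexical and vector clauses, and a search pipeline with the normalization-processor normalizes and merges their scores. A minimal sketch of the two request bodies involved (the index fields, weights, and vector values are illustrative):

```python
import json

# Search pipeline body: normalize lexical and vector scores, then combine
# them with a weighted arithmetic mean. The weights are illustrative.
pipeline_body = {
    "description": "Normalize and merge lexical + vector scores",
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [0.3, 0.7]},
                },
            }
        }
    ],
}

# Hybrid query body: a BM25 match clause plus a k-NN vector clause.
# "title" and "title_vector" are illustrative field names.
hybrid_query = {
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"title": "waterproof hiking boots"}},
                {"knn": {"title_vector": {"vector": [0.1, 0.2, 0.3], "k": 10}}},
            ]
        }
    }
}

print(json.dumps(hybrid_query, indent=2))
```

In practice, a client such as opensearch-py would register the pipeline via PUT /_search/pipeline/&lt;name&gt; and then send the hybrid query with the search_pipeline request parameter.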

Core engine modernization enables AI agents as the primary data retrieval interface, targeting a 2x throughput improvement by the end of 2026 through three capabilities:

  1. Streaming aggregations deliver progressive results in milliseconds, improving perceived latency by 3x and reducing resource utilization by 2x.
  2. Agent-native gRPC APIs support interruptible, multi-turn conversations, achieving up to 2x performance gains by replacing JSON serialization with Protobuf. 
  3. Multi-engine, multi-format query execution routes queries across search indexes and open formats such as Parquet and Iceberg. 

These capabilities are supported by vectorized execution, streaming operators, pluggable execution engines, and zero-copy data access.

Native support for agentic systems provides a built-in agentic context engineering stack, including memory containers, tool routing, and MCP to manage hundreds of tools without context window limitations or hallucinations. AgentHealth evaluates agent success in solving problems. These capabilities enable Agentic Relevance Tuning (ART), an agentic product that solves complex relevance challenges safely and efficiently, demonstrating a multi-agent system for improving search relevance.

Scale for vector search addresses price-performance optimization critical for running AI/ML applications at scale. In 2026, OpenSearch focuses on three key areas: performance, cost reduction, and relevance. 

Performance improvements include hardware acceleration, bulk vector scoring, vector reordering to reduce disk IOPS, Better Binary Quantization (BBQ), improved graph quality, and smart routing that enables semantic-based data organization across nodes for up to 3x throughput gains. 

Cost reduction is achieved through advanced quantization techniques exceeding 32x compression ratios, elimination of duplicated flat vector storage, and optimizing graph structure through vector node clumping and clustering. 
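The compression trade-off described above is already exposed through OpenSearch’s disk-optimized vector storage, where a knn_vector field can request on-disk mode with an explicit compression level. A sketch of such an index mapping (the index layout, field name, and dimension are illustrative):

```python
import json

# Index body for a disk-optimized k-NN vector field with 32x compression.
# "embedding" and the 768 dimension are illustrative; "mode" and
# "compression_level" are existing knn_vector mapping options.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,
                "mode": "on_disk",
                "compression_level": "32x",
            }
        }
    },
}

print(json.dumps(index_body, indent=2))
```

Higher compression levels trade some recall for lower storage cost, which is the balance the quantization work above aims to improve.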

Relevance enhancements include core algorithms using full-precision vectors for graph building, a search planner for vector indexes, and model-based vector similarity support.

Pillar 2: Observability and analytics

Native Time-Series Database (TSDB) support addresses the historical challenges of storing large-scale metrics efficiently. The project is introducing a native storage engine designed specifically for time-series data. This new engine moves beyond the limitations of standard indexing to provide high-performance, cost-effective storage for telemetry. By optimizing the way metrics are handled internally, OpenSearch can serve as the high-scale backbone for unified observability without the performance regressions typically seen in multi-purpose data stores.

Streamlined ingestion with Data Prepper simplifies how data enters the new TSDB engine. The project is significantly enhancing Data Prepper by adding support for the Prometheus Remote Write v1 protocol, allowing users to integrate OpenSearch into existing monitoring pipelines with zero friction. Additionally, Data Prepper will natively convert OpenTelemetry (OTel) metrics into TSDB format, automatically optimizing data as users adopt OTLP standards. The project is also exploring expanded PromQL support for non-Prometheus sources such as TSDB, enabling users to apply their existing Prometheus expertise across OpenSearch.

Unified analytics and developer experience continues the project’s investment in the Piped Processing Language (PPL) to provide a more intuitive, flow-based way to query across disparate data types. To help teams get started faster, the project has introduced a slimmed-down observability stack that launches with a single command-line invocation or a Docker Compose download. These initiatives remove complexity, allowing engineers to move from installation to actionable insights in minutes rather than days. The project is also integrating large-scale industry use cases, such as Uber’s metrics integration, to ensure the platform remains robust enough for the world’s most demanding observability workloads.
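PPL’s pipe syntax is already available through the _plugins/_ppl endpoint, which accepts a JSON body containing the query string. A small sketch (the index pattern and field names are illustrative):

```python
import json

# PPL request body: filter server errors, then count them per host.
# "logs-*", "status", and "host" are illustrative names; the body would
# be POSTed to the _plugins/_ppl endpoint.
ppl_request = {
    "query": "source=logs-* | where status >= 500 | stats count() by host"
}

print(json.dumps(ppl_request))
```

Each pipe stage transforms the output of the previous one, which is what makes PPL a natural fit for flow-based exploration across logs, metrics, and traces.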

Pillar 3: Scalability and resiliency

Seamless, intelligent scaling eliminates scaling pain points that traditionally require complete cluster redesigns at every growth stage. The project is building a consistent desktop-to-cloud architecture so that the configuration a developer runs on a laptop carries over unchanged to the largest cloud-native clusters. To reduce operational overhead, the project is replacing manual scripting with intelligent automation for index tiering and snapshot management, alongside resource-aware optimizers that skip or engage specific components based on real-time cluster load.

Native pull-based indexing addresses the inherent limitations of traditional push-based models, particularly during unpredictable traffic surges. The project is advancing its pull-based ingestion framework, originally developed and contributed by Uber. This framework introduces native backpressure handling that safeguards cluster stability by allowing the engine to pull data from external event streams at its own pace rather than being overwhelmed by incoming push requests. This architectural shift prevents server overload during spikes and removes the need for the translog on indexing nodes, significantly reducing resource overhead and improving overall efficiency.
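As an illustration of the model described above, the experimental pull-based ingestion feature declares an ingestion source at index creation so the engine pulls from an event stream, such as a Kafka topic, at its own pace. A sketch of such index settings (the topic, broker address, and parameter layout are illustrative assumptions, not a definitive schema):

```python
import json

# Index settings declaring a pull-based ingestion source backed by Kafka.
# All names and the exact settings layout here are illustrative: this is a
# sketch of the experimental pull-based ingestion feature, and the real
# shape may differ by OpenSearch version.
index_body = {
    "settings": {
        "ingestion_source": {
            "type": "kafka",
            "param": {
                "topic": "events",
                "bootstrap_servers": "localhost:9092",
            },
        }
    },
    "mappings": {"properties": {"message": {"type": "text"}}},
}

print(json.dumps(index_body, indent=2))
```

Because the engine controls the read rate, backpressure is inherent: a traffic spike accumulates in the stream rather than overwhelming the indexing nodes.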

Query Insights has evolved from a monitoring tool into a query performance management platform. A rule-based recommendation engine automatically detects query anti-patterns and generates actionable fix suggestions with confidence scores, displayed directly in the Query Insights dashboard. A new query profiler UI provides deep execution-phase inspection and historical correlation with similar slow queries. The top N query page includes rich visualizations such as P90/P99 metrics, breakdown charts by node, index, user, and WLM group, and performance timelines. Query event monitoring integrates with the Alerting plugin to deliver proactive notifications for performance regressions, resource spikes, and new queries added to the top N list. For multi-tenant deployments, RBAC-based filtering scopes Query Insights visibility by username or backend role. A cache layer for recently finished queries ensures that short-lived queries are captured, and a remote repository exporter enables exporting top N data to S3, Azure, and GCS for cross-cluster analysis and long-term retention.
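The top N data underlying these views is already queryable through the Query Insights plugin’s top queries endpoint. A sketch of how a client might build a request for latency-ranked queries (the host is illustrative):

```python
from urllib.parse import urlencode

# Build a request URL for the Query Insights top N queries endpoint,
# ranked by latency. The host is illustrative; the same request could be
# issued with any HTTP client against a running cluster.
host = "http://localhost:9200"
params = {"type": "latency"}
url = f"{host}/_insights/top_queries?{urlencode(params)}"

print(url)
```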

Workload management includes group-level search settings that let administrators define per-workload-group overrides for timeout, cancellation interval, the maximum number of concurrent shard requests, and aggregation bucket limits. These settings are applied automatically to all requests assigned to a group, enabling true multi-tenant governance without requiring clients to pass parameters in every query. The initial implementation delivers the foundational infrastructure, starting with phase_took as the first setting, with additional settings to follow.
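For context, workload groups already carry per-group resource limits today, and the planned search settings would layer per-group overrides on top of them. A sketch of a workload group body (the group name, limit values, and exact field layout are illustrative assumptions about the workload management API, not a definitive schema):

```python
import json

# Workload group with per-group CPU and memory limits. "analytics" and the
# limit fractions are illustrative, and the body layout is a sketch of the
# workload management API. The planned group-level search settings would
# add overrides (e.g., timeouts) applied to every request in the group.
workload_group = {
    "name": "analytics",
    "resiliency_mode": "enforced",
    "resource_limits": {"cpu": 0.4, "memory": 0.2},
}

print(json.dumps(workload_group, indent=2))
```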

Advanced vector and ML capabilities continue the project’s investment in better vector support, including AMD GPU support to enable on-premises deployments. This extends to deeper inferencing capabilities, such as simplifying how to run a Qwen3 model on its own inference service. The project aims to remove the bottlenecks of running an ingest pipeline with a remote ML model so users can process many more documents at a time. The roadmap includes exploring new techniques such as supporting GGUF models and last token pooling.

Pillar 4: Community and platform

Refining the contributor path establishes a clearly defined ladder for advancing from OpenSearch contributor to OpenSearch maintainer. This initiative is designed to remove ambiguity from the promotion process and, crucially, to increase organizational diversity within the maintainership. By providing a transparent roadmap for leadership, the project ensures that its future is shaped by a broad, representative group of experts from across the global community.

Evolving technical governance keeps pace as the project grows. The project is scaling the Technical Steering Committee (TSC) and improving overall technical governance to handle the increasing contribution velocity. Following the success of the Build Technical Advisory Group (TAG), the project has officially chartered three new TAGs: Observability, Search, and Security. These groups will provide specialized oversight and ensure that innovation in these critical areas remains cohesive and community-driven.

Empowering community advocates through the OpenSearch Software Foundation’s Ambassador Program recognizes community leaders who actively contribute to growing the OpenSearch platform. Ambassadors help others learn, create, and collaborate by sharing expertise, supporting community members, and representing OpenSearch around the world. These are trusted people who share their knowledge, organize and participate in user groups and conferences, and bring community feedback back into the project and the foundation. 

Growing the OpenSearch Software Foundation ensures long-term independence and stability. Under the leadership of the Executive Director, Bianca Lewis, the foundation builds programs and initiatives that support both individual contributors and organizations, creating pathways for deeper engagement and sustainability.

Investing in professional growth supports users’ career development through a formalized Certification Program. This program will allow individuals to validate and demonstrate their OpenSearch expertise, providing a recognized standard for search and analytics proficiency. By professionalizing these skills, the project helps individuals advance their careers while also creating a robust pool of certified talent on which organizations can rely to manage their AI-native search infrastructure.

How to get involved

The strength of OpenSearch has always been its community. From individual contributors submitting their first pull request to enterprise partners building production-grade integrations, the project continues to thrive because of the people who believe in the power of open source. 

Whether you’re a seasoned contributor or just discovering OpenSearch, there’s a place for you in this community. Here are some ways to get involved:

  • Contribute code or documentation: Check out our GitHub repositories and contribution guidelines to start making an impact.
  • Join the conversation: Participate in our community forum, Slack workspace, and mailing lists to ask questions, share ideas, and connect with other users.
  • Attend, present, or lead a user group: Bring the OpenSearch community to your city or join an existing group to learn, share your knowledge, and network.
  • Help triage issues: Join a virtual meeting to work with maintainers on triaging open issues. 
  • Develop content: Write a blog post about OpenSearch. Get started by reviewing the blogging guidelines. 
  • Subscribe to the newsletter: Learn more about what is happening with the project right in your inbox.
  • Provide feedback: Your input directly shapes our roadmap. Share your use cases, feature requests, and suggestions in GitHub issues or our community channels.

OpenSearch is built by and for its community. The project is grateful for every contribution, big or small, and looks forward to building the future of search and analytics together in 2026 and beyond.
