The buzz at OpenSearchCon India was electric. And for good reason. In a packed keynote session, leaders from AWS and Freshworks unpacked how search is evolving in the age of generative and agentic AI, and how both communities are building next-gen experiences on a vector-first foundation.
OpenSearch in the Era of Agentic AI
Search is no longer just about keywords. With the rise of generative AI and large language models, organizations now need search systems that can understand context, intent, and meaning. OpenSearch has been actively innovating to meet this challenge head-on, with semantic search capabilities, hybrid search, multimodal data support, and even conversational, agentic workflows.
At the heart of this transformation is the OpenSearch vector engine. It empowers developers to index and retrieve rich, unstructured data, from documents and images to embeddings generated by transformer models, making it easier than ever to build AI-native search applications.
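To make the idea concrete, here is a minimal sketch of what an AI-native setup on the vector engine looks like: a k-NN index mapping and a nearest-neighbor query, expressed as request bodies following the conventions of OpenSearch's k-NN plugin. The index name, field names, and the tiny 4-dimensional vectors are illustrative placeholders; real transformer embeddings typically have hundreds of dimensions.

```python
# Sketch: request bodies for a k-NN vector index and query in OpenSearch.
# "docs", "embedding", and the 4-dim vectors are illustrative placeholders;
# real embeddings come from a transformer model (often 384-1536 dims).

# Index mapping: enable the k-NN plugin and declare a knn_vector field
# backed by an HNSW graph for approximate nearest-neighbor search.
index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "embedding": {
                "type": "knn_vector",
                "dimension": 4,
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
            },
        }
    },
}

# Query body: retrieve the k documents whose stored embeddings are
# nearest to the query vector (e.g., an embedded user question).
knn_query = {
    "size": 3,
    "query": {"knn": {"embedding": {"vector": [0.1, 0.2, 0.3, 0.4], "k": 3}}},
}
```

With the opensearch-py client, these bodies would be sent via `client.indices.create(index="docs", body=index_body)` and `client.search(index="docs", body=knn_query)`.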
Pallavi Priyadarshini, who heads engineering for search at the OpenSearch Project and is a member of the Technical Steering Committee, explained, “We’re really pushing the boundaries of search. Semantic, hybrid, multimodal, and now agentic AI, all made possible by the OpenSearch vector engine and the power of the open source community.”
The release of OpenSearch 3.0 marked a major leap forward in this journey, featuring a 9.5x performance improvement over OpenSearch 1.3, GPU acceleration for the vector engine, native support for the Model Context Protocol (MCP) for agentic workflows, improved hybrid search and indexing speed, improvements from the upgrade to Lucene 10, new observability capabilities, and many features supporting a cloud-native architecture. All of it is purpose-built for the modern AI stack, with performance and flexibility at scale.

Pallavi Priyadarshini presents keynote address at OpenSearchCon India 2025
A Real-World Example: Freshworks
Freshworks, a global SaaS leader with over 75,000 customers across 120+ countries, has embedded OpenSearch at the center of its AI transformation. As the company expands its suite of Customer Experience (CX) and Employee Experience (EX) products, it’s leveraging OpenSearch for everything from search and observability to generative AI and conversational experiences.
Handling petabytes of data and millions of queries monthly, Freshworks needed a solution that could scale across geographies, workloads, and user expectations. OpenSearch fit the bill, not just for keyword search but also for advanced hybrid and vector-based retrieval.
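Hybrid retrieval of the kind described here works by fusing lexical (BM25) and vector-similarity scores into a single ranking; in OpenSearch this is done by a search pipeline's normalization processor. Below is a small Python sketch of that fusion logic under an assumed min-max normalization and equal-weight arithmetic-mean configuration, with made-up scores, not Freshworks' actual setup.

```python
# Sketch: fusing lexical and vector scores the way a hybrid-search
# normalization step does (min-max normalization, then a weighted
# arithmetic mean). Scores and weights below are illustrative.

def min_max(scores):
    """Rescale a list of scores to [0, 1]; a constant list maps to all 1.0."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_scores(bm25, knn, weights=(0.5, 0.5)):
    """Combine per-document BM25 and k-NN scores into one ranking score."""
    nb, nk = min_max(bm25), min_max(knn)
    wb, wk = weights
    return [wb * b + wk * k for b, k in zip(nb, nk)]

# Three candidate documents: raw BM25 scores and raw vector similarities.
bm25 = [12.0, 7.5, 3.0]
knn = [0.82, 0.91, 0.40]
print(hybrid_scores(bm25, knn))
```

Note that the two score scales are incomparable before normalization (BM25 in the tens, cosine-style similarity in [0, 1]), which is exactly why the normalization step exists.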
According to Sreedar Gade, VP of Engineering and Responsible AI at Freshworks, “We evaluated five different vector databases. OpenSearch stood out, not just in terms of performance and cost, but because it was already the foundation of our search stack. With OpenSearch, we can deliver unified, hybrid search experiences across text, chat, and multimodal data, all with sub-millisecond latency.”
From AI copilots that assist human agents, to fully autonomous chat-based assistants and federated search across customer-owned data sources, Freshworks is turning OpenSearch into a core engine for intelligent interaction.
Innovation That’s Built to Scale
What makes this partnership so compelling isn’t just the tech: it’s the shared vision. Both OpenSearch and Freshworks are investing in performance, openness, and long-term architectural scalability. Whether it’s extending search across federated data lakes or scaling up multi-agent workflows, both teams are setting the foundation for the next decade of intelligent applications.
Thanks to OpenSearch’s AI/ML connector framework, vector-native infrastructure, and growing ecosystem, implementing and scaling these capabilities is more accessible than ever.
Want to dive deeper into the future of search?
👉 Click here to view the full keynote from OpenSearchCon India