
You're viewing version 2.16 of the OpenSearch documentation. This version is no longer maintained. For the latest version, see the current documentation. For information about OpenSearch version maintenance, see Release Schedule and Maintenance Policy.

Sycamore

Sycamore is an open-source, AI-powered document processing engine designed to prepare unstructured data for retrieval-augmented generation (RAG) and semantic search using Python. Sycamore supports chunking and enriching a wide range of complex document types, including reports, presentations, transcripts, and manuals. Additionally, Sycamore can extract and process embedded elements, such as tables, figures, graphs, and other infographics. It can then load the data into target indexes, including vector and keyword indexes, using a connector like the OpenSearch connector.

To get started, visit the Sycamore documentation.

Sycamore ETL pipeline structure

A Sycamore extract, transform, load (ETL) pipeline applies a series of transformations to a DocSet, which is a collection of documents and their constituent elements (for example, tables, blocks of text, or headers). At the end of the pipeline, the DocSet is loaded into OpenSearch vector and keyword indexes.
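Conceptually, a DocSet can be pictured as a collection of documents, each carrying the typed elements extracted from it. The following is a minimal pure-Python sketch of that shape, not Sycamore's actual classes (which carry much richer metadata); the `Element` and `Document` names here are illustrative stand-ins:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One structural piece of a document, such as a table, text block, or header."""
    type: str   # for example "Title", "Text", or "Table"
    text: str

@dataclass
class Document:
    """A source document plus the elements extracted from it."""
    path: str
    elements: list[Element] = field(default_factory=list)

# A "DocSet" is then a collection of such documents that
# pipeline transforms map over.
docset = [
    Document(
        path="report.pdf",
        elements=[
            Element(type="Title", text="Q3 Report"),
            Element(type="Table", text="Revenue by region ..."),
        ],
    ),
]
```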

A typical pipeline for preparing unstructured data for vector or hybrid search in OpenSearch consists of the following steps:

  • Read documents into a DocSet.
  • Partition documents into structured JSON elements.
  • Extract metadata and filter and clean data using transforms.
  • Create chunks from groups of elements.
  • Embed the chunks using the model of your choice.
  • Load the embeddings, metadata, and text into OpenSearch vector and keyword indexes.

For an example pipeline that uses this workflow, see this notebook.
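The steps above can be sketched end to end in plain Python. This is only a schematic of the data flow, not the Sycamore API: the `partition`, `chunk`, `embed`, and `load` functions here are toy stand-ins for Sycamore transforms, a real embedding model, and the OpenSearch connector.

```python
def partition(doc_text: str) -> list[str]:
    # Stand-in for partitioning: split a document into paragraph elements.
    return [p.strip() for p in doc_text.split("\n\n") if p.strip()]

def chunk(elements: list[str], max_chars: int = 200) -> list[str]:
    # Stand-in for chunking: merge consecutive elements up to a size limit.
    chunks, current = [], ""
    for el in elements:
        if current and len(current) + len(el) + 1 > max_chars:
            chunks.append(current)
            current = el
        else:
            current = f"{current} {el}".strip()
    if current:
        chunks.append(current)
    return chunks

def embed(chunks: list[str]) -> list[list[float]]:
    # Stand-in for a real embedding model: a toy two-dimensional
    # vector (character count, word count) per chunk.
    return [[float(len(c)), float(c.count(" ") + 1)] for c in chunks]

def load(chunks: list[str], vectors: list[list[float]]) -> list[dict]:
    # Stand-in for the OpenSearch connector: build index documents
    # carrying text, embedding, and metadata.
    return [
        {"chunk_id": i, "text": c, "embedding": v}
        for i, (c, v) in enumerate(zip(chunks, vectors))
    ]

doc = "First paragraph of a report.\n\nSecond paragraph with more detail."
elements = partition(doc)
chunks = chunk(elements)
index_docs = load(chunks, embed(chunks))
```

In a real pipeline, each of these functions corresponds to a DocSet transform, and `load` is replaced by a write to OpenSearch vector and keyword indexes through the connector.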

Install Sycamore

We recommend installing the Sycamore library using pip. You can specify and install the OpenSearch connector using extras. For example:

pip install sycamore-ai[opensearch]

By default, Sycamore works with the Aryn Partitioning Service to process PDFs. To run inference locally for partitioning or embedding, install Sycamore with the local-inference extra as follows:

pip install sycamore-ai[opensearch,local-inference]

Next steps

For more information, visit the Sycamore documentation.
