
Vector quantization

By default, OpenSearch supports indexing and querying vectors of type float, where each vector dimension occupies 4 bytes of memory. For use cases that require large-scale ingestion, storing float vectors can be expensive because OpenSearch needs to construct, load, save, and search graphs (for the native faiss and nmslib [deprecated] engines). To reduce the memory footprint, you can use vector quantization.
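
As a rough illustration of the savings, the arithmetic below compares the raw float footprint with a 1-bit quantized one for a hypothetical workload of one million 768-dimensional vectors:

```python
# Back-of-the-envelope memory estimate (workload numbers are hypothetical).
num_vectors = 1_000_000
dimension = 768
bytes_per_float = 4  # each float dimension occupies 4 bytes

raw_gib = num_vectors * dimension * bytes_per_float / 1024**3
print(f"Raw float vectors:  {raw_gib:.2f} GiB")       # ~2.86 GiB
print(f"1-bit quantization: {raw_gib / 32:.2f} GiB")  # ~0.09 GiB (32x smaller)
```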

OpenSearch supports several varieties of quantization. In general, the level of quantization trades the accuracy of the nearest neighbor search against the size of the memory footprint consumed by the vector index.

Quantize vectors outside of OpenSearch

Quantize vectors outside of OpenSearch before ingesting them into an OpenSearch index.

Byte vectors

Quantize vectors into byte vectors. A minimal client-side sketch follows (assuming NumPy and a simple symmetric min-max scheme; real pipelines often calibrate the scale on a sample set instead of per-batch extremes).
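
```python
import numpy as np

def quantize_to_byte(vectors: np.ndarray) -> np.ndarray:
    """Scale float vectors into the signed byte range [-128, 127]."""
    max_abs = np.abs(vectors).max()  # symmetric scale from batch extremes
    return np.round(vectors / max_abs * 127.0).astype(np.int8)

floats = np.random.default_rng(0).uniform(-1, 1, size=(10, 8)).astype(np.float32)
byte_vectors = quantize_to_byte(floats)  # ingest into a knn_vector field with data_type "byte"
```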

Binary vectors

Quantize vectors into binary vectors. A minimal sign-based binarization sketch follows (assuming NumPy; each indexed byte packs 8 vector bits, so the dimension should be a multiple of 8).
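
```python
import numpy as np

def binarize(vectors: np.ndarray) -> np.ndarray:
    """Sign-binarize float vectors and pack every 8 bits into one byte."""
    bits = (vectors > 0).astype(np.uint8)  # 1 bit per dimension
    return np.packbits(bits, axis=-1)      # 8 bits -> 1 byte

floats = np.random.default_rng(0).uniform(-1, 1, size=(10, 16)).astype(np.float32)
binary_vectors = binarize(floats)  # shape (10, 2): 16 dims -> 2 bytes
```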

Quantize vectors within OpenSearch

Use OpenSearch built-in quantization to quantize vectors.

Lucene scalar quantization

Use built-in scalar quantization for the Lucene engine. A minimal index-creation sketch with the opensearch-py client follows (host, index, and field names are hypothetical); the Lucene engine's "sq" encoder quantizes 32-bit floats into smaller integer representations at index time.
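
```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

client.indices.create(
    index="lucene-sq-index",  # hypothetical index name
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "my_vector": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "method": {
                        "name": "hnsw",
                        "engine": "lucene",
                        "space_type": "l2",
                        "parameters": {
                            "encoder": {"name": "sq"}  # built-in Lucene scalar quantizer
                        },
                    },
                }
            }
        },
    },
)
```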

Faiss 16-bit scalar quantization

Use built-in scalar quantization for the Faiss engine. A sketch of the corresponding mapping body follows (field names hypothetical; created the same way as the Lucene example above); "fp16" stores each dimension in 2 bytes, and clip controls whether out-of-range values are clipped rather than rejected.
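
```python
# Hypothetical mapping body for Faiss 16-bit scalar quantization.
faiss_fp16_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "my_vector": {
                "type": "knn_vector",
                "dimension": 768,
                "method": {
                    "name": "hnsw",
                    "engine": "faiss",
                    "space_type": "l2",
                    "parameters": {
                        # "sq" with type "fp16" halves the per-dimension footprint
                        "encoder": {"name": "sq", "parameters": {"type": "fp16", "clip": True}}
                    },
                },
            }
        }
    },
}
```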

Faiss product quantization

Use built-in product quantization for the Faiss engine. Product quantization requires a trained model; a hedged sketch of a Train API request body follows (training index, field, and parameter values are hypothetical), whose resulting model ID is then referenced from the knn_vector field mapping.
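
```python
# Hypothetical body sent to POST /_plugins/_knn/models/<model_id>/_train.
pq_train_body = {
    "training_index": "train-index",   # hypothetical index holding sample vectors
    "training_field": "train_field",
    "dimension": 768,
    "method": {
        "name": "ivf",
        "engine": "faiss",
        "space_type": "l2",
        "parameters": {
            "nlist": 128,
            "encoder": {
                # m sub-vectors, each encoded with code_size bits
                "name": "pq",
                "parameters": {"m": 8, "code_size": 8},
            },
        },
    },
}
```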

Binary quantization

Use built-in binary quantization for the Faiss engine. A sketch of a mapping that enables binary quantization through the compression_level shorthand follows (field names hypothetical; a value such as "32x" stores roughly 1 bit per dimension).
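
```python
# Hypothetical mapping body enabling Faiss binary quantization.
bq_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "my_vector": {
                "type": "knn_vector",
                "dimension": 768,
                "space_type": "l2",
                "mode": "on_disk",           # on-disk mode defaults to binary quantization
                "compression_level": "32x",  # 1 bit per dimension
            }
        }
    },
}
```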
