The supporting infrastructure for large LLM workloads can be difficult and costly to set up, and storing vector data requires careful attention to resource consumption. OpenSearch offers a straightforward way to store embeddings generated by tools such as Azure OpenAI or its Neural Search plugin, and it handles querying as well, reducing operational overhead. This talk shows how to prepare PDF files, send them to Azure’s OpenAI API to generate embeddings, and store the resulting vectors in OpenSearch, all running on a low-maintenance Raspberry Pi cluster with Charmed OpenSearch.
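To make the pipeline concrete, here is a minimal sketch of the three steps the talk covers: extracting text from a PDF, embedding it via Azure OpenAI, and indexing the vectors in OpenSearch. It assumes the `openai`, `opensearch-py`, and `pypdf` Python packages, an Azure OpenAI deployment named `text-embedding-ada-002` (which produces 1536-dimensional vectors), credentials in environment variables, and a local OpenSearch node with default admin credentials; the index name `pdf-embeddings` and file `example.pdf` are placeholders, not part of the talk's actual code.

```python
import os

from openai import AzureOpenAI          # pip install openai
from opensearchpy import OpenSearch     # pip install opensearch-py
from pypdf import PdfReader             # pip install pypdf

# Azure OpenAI client; endpoint, key, and deployment name are assumptions.
aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
DEPLOYMENT = "text-embedding-ada-002"  # 1536-dimensional embeddings

# OpenSearch client pointed at a local node; adjust hosts/auth for a real
# (e.g. Charmed OpenSearch) deployment.
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Create a kNN-enabled index whose vector field matches the embedding size.
client.indices.create(
    index="pdf-embeddings",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "text": {"type": "text"},
                "embedding": {"type": "knn_vector", "dimension": 1536},
            }
        },
    },
)

# Extract text from a PDF, one chunk per page for simplicity.
reader = PdfReader("example.pdf")
for page in reader.pages:
    chunk = page.extract_text()
    if not chunk:
        continue
    # Generate the embedding via Azure OpenAI ...
    vector = aoai.embeddings.create(model=DEPLOYMENT, input=chunk).data[0].embedding
    # ... and store it alongside the source text.
    client.index(index="pdf-embeddings", body={"text": chunk, "embedding": vector})
```

Retrieval then follows the same pattern: embed the question with the same deployment and run an approximate kNN search, e.g. `client.search(index="pdf-embeddings", body={"size": 3, "query": {"knn": {"embedding": {"vector": q, "k": 3}}}})`, where `q` is the query embedding.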