Semantic search
Semantic search considers the context and intent of a query. In OpenSearch, semantic search is facilitated by text embedding models: the text in each document is encoded as a dense vector (a list of floats) and ingested into a vector index.
PREREQUISITE
Before using semantic search, you must set up a text embedding model. For more information, see Choosing a model.
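If you haven't configured a model yet, the following sketch shows one way to register and deploy an OpenSearch-provided pretrained model using the ML Commons API. The model name, version, and format shown are illustrative; use values that match your setup:
POST /_plugins/_ml/models/_register?deploy=true
{
"name": "huggingface/sentence-transformers/msmarco-distilbert-base-tas-b",
"version": "1.0.1",
"model_format": "TORCH_SCRIPT"
}
The response contains a task ID. Poll the Tasks API (GET /_plugins/_ml/tasks/<task_id>) until the task completes and returns the model ID, which you will use in the following steps.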
Configuring semantic search
There are two ways to configure semantic search:
- Automated workflow (Recommended for quick setup): Automatically create an ingest pipeline and index with minimal configuration.
- Manual setup (Recommended for custom configurations): Manually configure each component for greater flexibility and control.
Automated workflow
OpenSearch provides a workflow template that automatically creates both an ingest pipeline and an index. You must provide the model ID for the configured model when creating a workflow. Review the semantic search workflow template defaults to determine whether you need to update any of the parameters. For example, if the model dimensionality is different from the default (1024), specify the dimensionality of your model in the output_dimension parameter. To create the default semantic search workflow, send the following request:
POST /_plugins/_flow_framework/workflow?use_case=semantic_search&provision=true
{
"create_ingest_pipeline.model_id": "mBGzipQB2gmRjlv_dOoB"
}
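For example, to use a 768-dimensional model, you might override the default dimension in the same request. This is an illustrative sketch; check the semantic search workflow template defaults for the exact parameter names and value formats:
POST /_plugins/_flow_framework/workflow?use_case=semantic_search&provision=true
{
"create_ingest_pipeline.model_id": "mBGzipQB2gmRjlv_dOoB",
"output_dimension": "768"
}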
OpenSearch responds with a workflow ID for the created workflow:
{
"workflow_id" : "U_nMXJUBq_4FYQzMOS4B"
}
To check the workflow status, send the following request:
GET /_plugins/_flow_framework/workflow/U_nMXJUBq_4FYQzMOS4B/_status
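The response reports the workflow state and, once provisioning finishes, the resources that were created. The following is an abbreviated, illustrative example; your IDs will differ:
{
"workflow_id": "U_nMXJUBq_4FYQzMOS4B",
"state": "COMPLETED",
"resources_created": [
{
"workflow_step_name": "create_ingest_pipeline",
"workflow_step_id": "create_ingest_pipeline",
"resource_type": "pipeline_id",
"resource_id": "nlp-ingest-pipeline"
},
{
"workflow_step_name": "create_index",
"workflow_step_id": "create_index",
"resource_type": "index_name",
"resource_id": "my-nlp-index"
}
]
}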
Once the workflow completes, the state changes to COMPLETED. The workflow creates the following components:
- An ingest pipeline named nlp-ingest-pipeline
- An index named my-nlp-index
You can now continue with steps 3 and 4 to ingest documents into the index and search the index.
Manual setup
To manually configure semantic search, follow these steps:
- Create an ingest pipeline.
- Create an index for ingestion.
- Ingest documents into the index.
- Search the index.
Step 1: Create an ingest pipeline
To generate vector embeddings, you need to create an ingest pipeline that contains a text_embedding processor, which will convert the text in a document field to vector embeddings. The processor’s field_map determines the input fields from which to generate vector embeddings and the output fields in which to store the embeddings.
The following example request creates an ingest pipeline where the text from passage_text will be converted into text embeddings and the embeddings will be stored in passage_embedding:
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
"description": "A text embedding pipeline",
"processors": [
{
"text_embedding": {
"model_id": "bQ1J8ooBpBj3wT4HVUsb",
"field_map": {
"passage_text": "passage_embedding"
}
}
}
]
}
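Optionally, you can test the pipeline with the Simulate Pipeline API before ingesting any documents. A minimal sketch using the example field name:
POST /_ingest/pipeline/nlp-ingest-pipeline/_simulate
{
"docs": [
{
"_source": {
"passage_text": "Hello world"
}
}
]
}
If the model is deployed and reachable, the simulated document in the response contains a passage_embedding field holding the generated vector.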
To split long text into passages, use the text_chunking ingest processor before the text_embedding processor. For more information, see Text chunking.
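As a sketch, a combined chunking and embedding pipeline might look like the following. The chunking parameters and the passage_chunks field name are illustrative; note that embedding a list of chunks produces a list of vectors, so the output field must be mapped accordingly (for example, as a nested field):
PUT /_ingest/pipeline/nlp-chunking-pipeline
{
"description": "A text chunking and embedding pipeline",
"processors": [
{
"text_chunking": {
"algorithm": {
"fixed_token_length": {
"token_limit": 384,
"overlap_rate": 0.2,
"tokenizer": "standard"
}
},
"field_map": {
"passage_text": "passage_chunks"
}
}
},
{
"text_embedding": {
"model_id": "bQ1J8ooBpBj3wT4HVUsb",
"field_map": {
"passage_chunks": "passage_chunk_embedding"
}
}
}
]
}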
Step 2: Create an index for ingestion
To use the text embedding processor defined in your pipeline, create a vector index, adding the pipeline created in the previous step as the default pipeline. Ensure that the fields defined in the field_map are mapped as the correct types. Continuing with the example, the passage_embedding field must be mapped as a k-NN vector with a dimension that matches the model dimension. Similarly, the passage_text field should be mapped as text.
The following example request creates a vector index that is set up with a default ingest pipeline:
PUT /my-nlp-index
{
"settings": {
"index.knn": true,
"default_pipeline": "nlp-ingest-pipeline"
},
"mappings": {
"properties": {
"id": {
"type": "text"
},
"passage_embedding": {
"type": "knn_vector",
"dimension": 768,
"method": {
"engine": "lucene",
"space_type": "l2",
"name": "hnsw",
"parameters": {}
}
},
"passage_text": {
"type": "text"
}
}
}
}
For more information about creating a vector index and its supported methods, see Creating a vector index.
Step 3: Ingest documents into the index
To ingest documents into the index created in the previous step, send the following requests:
PUT /my-nlp-index/_doc/1
{
"passage_text": "Hello world",
"id": "s1"
}
PUT /my-nlp-index/_doc/2
{
"passage_text": "Hi planet",
"id": "s2"
}
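Equivalently, you can ingest both documents in a single Bulk API request; the index's default pipeline is applied to each document:
POST /_bulk
{ "index": { "_index": "my-nlp-index", "_id": "1" } }
{ "passage_text": "Hello world", "id": "s1" }
{ "index": { "_index": "my-nlp-index", "_id": "2" } }
{ "passage_text": "Hi planet", "id": "s2" }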
Before the document is ingested into the index, the ingest pipeline runs the text_embedding processor on the document, generating text embeddings for the passage_text field. The indexed document includes the passage_text field, which contains the original text, and the passage_embedding field, which contains the vector embeddings.
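To verify that the embeddings were generated, you can retrieve one of the documents directly:
GET /my-nlp-index/_doc/1
The returned _source should contain the original passage_text value and a passage_embedding array of floats whose length matches the model dimension (768 in this example).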
Step 4: Search the index
To perform vector search on your index, use the neural query clause in Query DSL queries. You can refine the results by using a vector search filter.
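For reference, a basic neural query without any additional clauses looks like the following (using the example model ID):
GET /my-nlp-index/_search
{
"query": {
"neural": {
"passage_embedding": {
"query_text": "Hi world",
"model_id": "bQ1J8ooBpBj3wT4HVUsb",
"k": 5
}
}
}
}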
The following example request uses a Boolean query to combine a filter clause and two query clauses: a neural query and a match query. The script_score query assigns custom weights to the query clauses:
GET /my-nlp-index/_search
{
"_source": {
"excludes": [
"passage_embedding"
]
},
"query": {
"bool": {
"filter": {
"wildcard": { "id": "*1" }
},
"should": [
{
"script_score": {
"query": {
"neural": {
"passage_embedding": {
"query_text": "Hi world",
"model_id": "bQ1J8ooBpBj3wT4HVUsb",
"k": 100
}
}
},
"script": {
"source": "_score * 1.5"
}
}
},
{
"script_score": {
"query": {
"match": {
"passage_text": "Hi world"
}
},
"script": {
"source": "_score * 1.7"
}
}
}
]
}
}
}
The response contains the matching document:
{
"took" : 36,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.2251667,
"hits" : [
{
"_index" : "my-nlp-index",
"_id" : "1",
"_score" : 1.2251667,
"_source" : {
"passage_text" : "Hello world",
"id" : "s1"
}
}
]
}
}
Setting a default model on an index or field
A neural query requires a model ID for generating vector embeddings. To avoid passing the model ID with each neural query request, you can set a default model on a vector index or a field.
First, create a search pipeline with a neural_query_enricher request processor. To set a default model for an index, provide the model ID in the default_model_id parameter. To set a default model for a specific field, provide the field name and the corresponding model ID in the neural_field_default_id map. If you provide both default_model_id and neural_field_default_id, neural_field_default_id takes precedence:
PUT /_search/pipeline/default_model_pipeline
{
"request_processors": [
{
"neural_query_enricher" : {
"default_model_id": "bQ1J8ooBpBj3wT4HVUsb",
"neural_field_default_id": {
"my_field_1": "uZj0qYoBMtvQlfhaYeud",
"my_field_2": "upj0qYoBMtvQlfhaZOuM"
}
}
}
]
}
Then set the default model for your index:
PUT /my-nlp-index/_settings
{
"index.search.default_pipeline" : "default_model_pipeline"
}
You can now omit the model ID when searching:
GET /my-nlp-index/_search
{
"_source": {
"excludes": [
"passage_embedding"
]
},
"query": {
"neural": {
"passage_embedding": {
"query_text": "Hi world",
"k": 100
}
}
}
}
The response contains both documents:
{
"took" : 41,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 1.22762,
"hits" : [
{
"_index" : "my-nlp-index",
"_id" : "2",
"_score" : 1.22762,
"_source" : {
"passage_text" : "Hi planet",
"id" : "s2"
}
},
{
"_index" : "my-nlp-index",
"_id" : "1",
"_score" : 1.2251667,
"_source" : {
"passage_text" : "Hello world",
"id" : "s1"
}
}
]
}
}
Next steps
- Explore our semantic search tutorials to learn how to build AI search applications.