Template query
Introduced 2.19
Use a template query to create search queries that contain placeholder variables. Placeholders are specified using the "${variable_name}" syntax (note that the variables must be enclosed in quotation marks). When you submit a search request, these placeholders remain unresolved until they are processed by search request processors. This approach is particularly useful when your initial search request contains data that needs to be transformed or generated at runtime.
For example, you might use a template query when working with the ml_inference search request processor, which converts text input into vector embeddings during the search process. The processor will replace the placeholders with the generated values before the final query is executed.
For a complete example, see Query rewriting using template queries.
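As a minimal sketch of the shape of a template query (the index, pipeline, field, and variable names here are hypothetical, chosen only for illustration), the template query wraps an ordinary query whose placeholder values are filled in by a search request processor in the pipeline:

```json
GET /my-index/_search?search_pipeline=my_pipeline
{
  "query": {
    "template": {
      "term": {
        "category": "${resolved_category}" // Resolved by a search request processor at runtime
      }
    }
  }
}
```

Until a processor supplies a value for resolved_category, the placeholder remains unresolved and the query cannot be executed.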
Example
The following example shows a template k-NN query with a "vector": "${text_embedding}" placeholder. The placeholder "${text_embedding}" will be replaced with embeddings generated by the ml_inference search request processor from the text input field:
GET /template-knn-index/_search?search_pipeline=my_knn_pipeline
{
  "query": {
    "template": {
      "knn": {
        "text_embedding": {
          "vector": "${text_embedding}", // Placeholder for the vector field
          "k": 2
        }
      }
    }
  },
  "ext": {
    "ml_inference": {
      "text": "sneakers" // Input text for the ml_inference processor
    }
  }
}
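The query above assumes that the index contains a text_embedding field of type knn_vector whose dimension matches the embedding model's output. A minimal mapping sketch for such an index follows; the dimension value shown is illustrative and must be replaced with your model's actual output dimension:

```json
PUT /template-knn-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "text_embedding": {
        "type": "knn_vector",
        "dimension": 384 // Must match the embedding model's output dimension
      }
    }
  }
}
```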
To use a template query with a search request processor, you need to configure a search pipeline. The following is an example configuration for the ml_inference search request processor. The input_map maps search request fields to model inputs. In this example, the ext.ml_inference.text field in the search request is mapped to the inputText field, the expected input field for the model. The output_map maps model outputs to request fields. In this example, the embedding output field from the model is mapped to the text_embedding placeholder variable in the template query:
PUT /_search/pipeline/my_knn_pipeline
{
  "request_processors": [
    {
      "ml_inference": {
        "model_id": "Sz-wFZQBUpPSu0bsJTBG",
        "input_map": [
          {
            "inputText": "ext.ml_inference.text" // Map input text from the request
          }
        ],
        "output_map": [
          {
            "text_embedding": "embedding" // Map output to the placeholder
          }
        ]
      }
    }
  ]
}
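After creating the pipeline, you can confirm its configuration by retrieving it:

```json
GET /_search/pipeline/my_knn_pipeline
```

The response returns the pipeline definition, including its request processors.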
After the ml_inference search request processor runs, the search request is rewritten. The vector field contains the embeddings generated by the processor, and the text_embedding field contains the processor output:
GET /template-knn-index/_search
{
  "query": {
    "template": {
      "knn": {
        "text_embedding": {
          "vector": [0.6328125, 0.26953125, ...],
          "k": 2
        }
      }
    }
  },
  "ext": {
    "ml_inference": {
      "text": "sneakers",
      "text_embedding": [0.6328125, 0.26953125, ...]
    }
  }
}
Limitations
Template queries require at least one search request processor in order to resolve placeholders. The search request processors in the pipeline must be configured to generate the variables expected by the template query.
Next steps
- For a complete example, see Template queries.