
This is an earlier version of the OpenSearch documentation. For the latest version, see the current documentation. For information about OpenSearch version maintenance, see Release Schedule and Maintenance Policy.

Monitors

Key terms

Term Definition
Monitor A job that runs on a defined schedule and queries OpenSearch indices. The results of these queries are then used as input for one or more triggers.
Trigger Conditions that, if met, generate alerts.
Alert An event associated with a trigger. When an alert is created, the trigger performs actions, which can include sending a notification.
Action The information that you want the monitor to send out after being triggered. Actions have a destination, a message subject, and a message body.
Destination A reusable location for an action. Supported locations are Amazon Chime, Email, Slack, or custom webhook.

Create destinations

  1. Choose Alerting, Destinations, Add destination.
  2. Specify a name for the destination so that you can identify it later.
  3. For Type, choose Slack, Amazon Chime, custom webhook, or email.

For Email, refer to the Email as a destination section below. For all other types, specify the webhook URL. See the documentation for Slack and Amazon Chime to learn more about webhooks.

If you’re using custom webhooks, you must specify more information: parameters and headers. For example, if your endpoint requires basic authentication, you might need to add a header with a key of Authorization and a value of Basic <Base64-encoded-credential-string>. You might also need to change Content-Type to whatever your webhook requires. Popular values are application/json, application/xml, and text/plain.

This information is stored in plain text in the OpenSearch cluster. We will improve this design in the future, but for now, the encoded credentials (which are neither encrypted nor hashed) might be visible to other OpenSearch users.
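As a sketch, a custom webhook destination created through the alerting REST API might look like the following. The destination name, URL, and encoded credentials are placeholders, and the exact request body can vary by OpenSearch version:

```json
POST _plugins/_alerting/destinations
{
  "name": "my-custom-webhook",
  "type": "custom_webhook",
  "custom_webhook": {
    "url": "https://example.com/alert-endpoint",
    "header_params": {
      "Content-Type": "application/json",
      "Authorization": "Basic dXNlcjpwYXNzd29yZA=="
    }
  }
}
```

As noted above, the Authorization header value is stored in plain text in the cluster.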

Email as a destination

To send or receive an alert notification as an email, choose Email as the destination type. Next, add at least one sender and recipient. We recommend adding email groups if you want to notify more than a few people of an alert. You can configure senders and recipients using Manage senders and Manage email groups.

Manage senders

Senders are email accounts from which the alerting plugin sends notifications.

To configure a sender email, do the following:

  1. After you choose Email as the destination type, choose Manage senders.
  2. Choose Add sender, New sender and enter a unique name.
  3. Enter the email address, SMTP host (e.g. smtp.gmail.com for a Gmail account), and the port.
  4. Choose an encryption method, or use the default value of None. However, most email providers require SSL or TLS, which requires a username and password in the OpenSearch keystore. Refer to Authenticate sender account to learn more.
  5. Choose Save to save the configuration and create the sender. You can create a sender even before you add your credentials to the OpenSearch keystore. However, you must authenticate each sender account before you use the destination to send your alert.

You can reuse senders across many different destinations, but each destination only supports one sender.

Manage email groups or recipients

Use email groups to create and manage reusable lists of email addresses. For example, one alert might email the DevOps team, whereas another might email the executive team and the engineering team.

You can enter individual email addresses or an email group in the Recipients field.

  1. After you choose Email as the destination type, choose Manage email groups. Then choose Add email group, New email group.
  2. Enter a unique name.
  3. For recipient emails, enter any number of email addresses.
  4. Choose Save.

Authenticate sender account

If your email provider requires SSL or TLS, you must authenticate each sender account before you can send an email. Enter these credentials in the OpenSearch keystore using the CLI. Run the following commands (in your OpenSearch directory) to enter your username and password. The <sender_name> is the name you entered for Sender earlier.

./bin/opensearch-keystore add plugins.alerting.destination.email.<sender_name>.username
./bin/opensearch-keystore add plugins.alerting.destination.email.<sender_name>.password

Note: Keystore settings are node-specific. You must run these commands on each node.

To change or update your credentials (after you’ve added them to the keystore on every node), call the reload API to automatically update those credentials without restarting OpenSearch:

POST _nodes/reload_secure_settings
{
  "secure_settings_password": "1234"
}

Create monitors

  1. Choose Alerting, Monitors, Create monitor.
  2. Specify a name for the monitor.
  3. Choose either Per query monitor or Per bucket monitor.

Whereas query-level monitors run your specified query and then check whether the query’s results trigger any alerts, bucket-level monitors let you select fields to create buckets and categorize your results into those buckets. The alerting plugin runs each bucket’s unique results against a script you define later, so you have finer control over which results should trigger alerts. Each of those buckets can trigger an alert, but query-level monitors can trigger only one alert at a time.

  4. Define the monitor in one of three ways: visually, using a query, or using an anomaly detector.

    • Visual definition works well for monitors that you can define as “some value is above or below some threshold for some amount of time.”

    • Query definition gives you flexibility in terms of what you query for (using OpenSearch query DSL) and how you evaluate the results of that query (Painless scripting).

      This example averages the cpu_usage field:

      {
        "size": 0,
        "query": {
          "match_all": {}
        },
        "aggs": {
          "avg_cpu": {
            "avg": {
              "field": "cpu_usage"
            }
          }
        }
      }
      

      You can even filter query results using {{period_start}} and {{period_end}}:

      {
        "size": 0,
        "query": {
          "bool": {
            "filter": [{
              "range": {
                "timestamp": {
                  "from": "{{period_end}}||-1h",
                  "to": "{{period_end}}",
                  "include_lower": true,
                  "include_upper": true,
                  "format": "epoch_millis",
                  "boost": 1
                }
              }
            }],
            "adjust_pure_negative": true,
            "boost": 1
          }
        },
        "aggregations": {}
      }
      

    “Start” and “end” refer to the interval at which the monitor runs. See Available variables.

    To define a monitor visually, choose Visual editor. Then choose a source index, a timeframe, an aggregation (for example, count() or average()), a data filter if you want to monitor a subset of your source index, and a group-by field if you want to include an aggregation field in your query. At least one group-by field is required if you’re defining a bucket-level monitor. Visual definition works well for most monitors.

    If you use the Security plugin, you can only choose indexes that you have permission to access. For details, see Alerting security.

    To use a query, choose Extraction query editor, add your query (using OpenSearch query DSL), and test it using the Run button.

    The monitor makes this query to OpenSearch as often as the schedule dictates; check the Query Performance section and make sure you’re comfortable with the performance implications.

    To use an anomaly detector, choose Anomaly detector and select your Detector.

    The anomaly detection option is for pairing with the anomaly detection plugin. See Anomaly Detection. For anomaly detectors, choose a monitor schedule that is appropriate for the detector interval. Otherwise, the alerting monitor might miss reading the results.

    For example, assume you set the monitor interval and the detector interval as 5 minutes, and you start the detector at 12:00. If an anomaly is detected at 12:05, it might be available at 12:06 because of the delay between writing the anomaly and it being available for queries. The monitor reads the anomaly results between 12:00 and 12:05, so it does not get the anomaly results available at 12:06.

    To avoid this issue, make sure the monitor interval is at least twice the detector interval. When you create a monitor using OpenSearch Dashboards, the anomaly detection plugin generates a default monitor schedule that’s twice the detector interval.

    Whenever you update a detector’s interval, make sure to update the associated monitor interval as well, as the anomaly detection plugin does not do this automatically.

    Note: Anomaly detection is available only if you are defining a per query monitor.

  5. Choose a frequency and timezone for your monitor. Note that you can only pick a timezone if you choose Daily, Weekly, Monthly, or custom cron expression for frequency.

  6. Add a trigger to your monitor.
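The monitor you configure in these steps can also be created through the alerting REST API. Below is a minimal query-level monitor sketch; the index name, field, and threshold are placeholders, and the exact trigger format can vary by OpenSearch version:

```json
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "name": "high-cpu-monitor",
  "monitor_type": "query_level_monitor",
  "enabled": true,
  "schedule": {
    "period": { "interval": 1, "unit": "MINUTES" }
  },
  "inputs": [{
    "search": {
      "indices": ["server-metrics"],
      "query": {
        "size": 0,
        "query": { "match_all": {} },
        "aggs": { "avg_cpu": { "avg": { "field": "cpu_usage" } } }
      }
    }
  }],
  "triggers": [{
    "name": "cpu-above-90",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].aggregations.avg_cpu.value > 90",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}
```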


Create triggers

Steps to create a trigger differ depending on whether you chose Visual editor, Extraction query editor, or Anomaly detector when you created the monitor.

You begin by specifying a name and severity level for the trigger. Severity levels help you manage alerts. A trigger with a high severity level (e.g. 1) might page a specific individual, whereas a trigger with a low severity level might message a chat room.

Remember that query-level monitors run your trigger’s script just once against the query’s results, but bucket-level monitors execute your trigger’s script on each bucket, so you should create a trigger that best fits the monitor you chose. If you want to execute multiple scripts, you must create multiple triggers.

Visual editor

For a query-level monitor’s Trigger condition, specify a threshold for the aggregation and timeframe you chose earlier, such as “is below 1,000” or “is exactly 10.”

The line moves up and down as you increase and decrease the threshold. Once this line is crossed, the trigger evaluates to true.

Bucket-level monitors also require you to specify a threshold and value for your aggregation and timeframe, but you can use a maximum of five conditions to better refine your trigger. Optionally, you can also use a keyword filter to filter for a specific field in your index.

Extraction query

If you’re using a query-level monitor, specify a Painless script that returns true or false. Painless is the default OpenSearch scripting language and has a syntax similar to Groovy.

Trigger condition scripts revolve around the ctx.results[0] variable, which corresponds to the extraction query response. For example, your script might reference ctx.results[0].hits.total.value or ctx.results[0].hits.hits[i]._source.error_code.

A return value of true means the trigger condition has been met, and the trigger should execute its actions. Test your script using the Run button.

The Info link next to Trigger condition contains a useful summary of the variables and results available to your query.

Bucket-level monitors require you to specify more information in your trigger condition. At a minimum, you must have the following fields:

  • buckets_path, which maps variable names to metrics to use in your script.
  • parent_bucket_path, which is a path to a multi-bucket aggregation. The path can include single-bucket aggregations, but the last aggregation must be multi-bucket. For example, if you have a pipeline such as agg1>agg2>agg3, agg1 and agg2 are single-bucket aggregations, but agg3 must be a multi-bucket aggregation.
  • script, which is the script that OpenSearch runs to evaluate whether to trigger any alerts.

For example, you might have a script that looks like the following:

{
  "buckets_path": {
    "count_var": "_count"
  },
  "parent_bucket_path": "composite_agg",
  "script": {
    "source": "params.count_var > 5"
  }
}

After mapping the count_var variable to the _count metric, you can use count_var in your script and reference _count data. Finally, composite_agg is a path to a multi-bucket aggregation.
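For the trigger above to resolve composite_agg, the monitor’s extraction query needs a matching multi-bucket aggregation. A minimal sketch, where the index field name is a placeholder:

```json
{
  "size": 0,
  "aggregations": {
    "composite_agg": {
      "composite": {
        "sources": [
          { "user": { "terms": { "field": "user.keyword" } } }
        ]
      }
    }
  }
}
```

Each bucket produced by composite_agg is then evaluated against the params.count_var > 5 script.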

Anomaly detector

For Trigger type, choose Anomaly detector grade and confidence.

Specify the Anomaly grade condition for the aggregation and timeframe you chose earlier, such as “IS ABOVE 0.7” or “IS EXACTLY 0.5.” The anomaly grade is a number between 0 and 1 that indicates how anomalous a data point is.

Specify the Anomaly confidence condition for the aggregation and timeframe you chose earlier, such as “IS ABOVE 0.7” or “IS EXACTLY 0.5.” The anomaly confidence is an estimate of the probability that the reported anomaly grade matches the expected anomaly grade.

The line moves up and down as you increase and decrease the threshold. Once this line is crossed, the trigger evaluates to true.

Sample scripts

// Evaluates to true if the query returned any documents
ctx.results[0].hits.total.value > 0
// Returns true if the avg_cpu aggregation exceeds 90
if (ctx.results[0].aggregations.avg_cpu.value > 90) {
  return true;
}
// Performs some crude custom scoring and returns true if that score exceeds a certain value
int score = 0;
for (int i = 0; i < ctx.results[0].hits.hits.length; i++) {
  // Weighs 500 errors 10 times as heavily as 503 errors
  if (ctx.results[0].hits.hits[i]._source.http_status_code == "500") {
    score += 10;
  } else if (ctx.results[0].hits.hits[i]._source.http_status_code == "503") {
    score += 1;
  }
}
if (score > 99) {
  return true;
} else {
  return false;
}

You can include the following variables in your messages using Mustache templates to surface more information about your monitors.

Available variables

Monitor variables

Variable Data Type Description
ctx.monitor Object Includes ctx.monitor.name, ctx.monitor.type, ctx.monitor.enabled, ctx.monitor.enabled_time, ctx.monitor.schedule, ctx.monitor.inputs, ctx.monitor.triggers, and ctx.monitor.last_update_time.
ctx.monitor.user Object Includes information about the user who created the monitor. Includes ctx.monitor.user.backend_roles and ctx.monitor.user.roles, which are arrays that contain the backend roles and roles assigned to the user. See alerting security for more information.
ctx.monitor.enabled Boolean Whether the monitor is enabled.
ctx.monitor.enabled_time Milliseconds Unix epoch time of when the monitor was last enabled.
ctx.monitor.schedule Object Contains a schedule of how often or when the monitor should run.
ctx.monitor.schedule.period.interval Integer The interval at which the monitor runs.
ctx.monitor.schedule.period.unit String The interval’s unit of time.
ctx.monitor.inputs Array An array that contains the indices and definition used to create the monitor.
ctx.monitor.inputs.search.indices Array An array that contains the indices the monitor observes.
ctx.monitor.inputs.search.query N/A The search query used to define the monitor.

Trigger variables

Variable Data Type Description
ctx.trigger.id String The trigger’s ID.
ctx.trigger.name String The trigger’s name.
ctx.trigger.severity String The trigger’s severity.
ctx.trigger.condition Object Contains the Painless script used when creating the monitor.
ctx.trigger.condition.script.source String The script used to define the trigger condition.
ctx.trigger.condition.script.lang String The scripting language used to define the trigger condition. Must be painless.
ctx.trigger.actions Array An array with one element that contains information about the action the monitor needs to trigger.

Action variables

Variable Data Type Description
ctx.trigger.actions.id String The action’s ID.
ctx.trigger.actions.name String The action’s name.
ctx.trigger.actions.destination_id String The alert destination’s ID.
ctx.trigger.actions.message_template.source String The message to send in the alert.
ctx.trigger.actions.message_template.lang String The scripting language used to define the message. Must be mustache.
ctx.trigger.actions.throttle_enabled Boolean Whether throttling is enabled for this trigger. See adding actions for more information about throttling.
ctx.trigger.actions.subject_template.source String The message’s subject in the alert.
ctx.trigger.actions.subject_template.lang String The scripting language used to define the subject. Must be mustache.

Other variables

Variable Data Type Description
ctx.results Array An array with one element (i.e. ctx.results[0]). Contains the query results. This variable is empty if the trigger was unable to retrieve results. See ctx.error.
ctx.last_update_time Milliseconds Unix epoch time of when the monitor was last updated.
ctx.periodStart String Unix timestamp for the beginning of the period during which the alert triggered. For example, if a monitor runs every ten minutes, a period might begin at 10:40 and end at 10:50.
ctx.periodEnd String The end of the period during which the alert triggered.
ctx.error String The error message if the trigger was unable to retrieve results or unable to evaluate the trigger, typically due to a compile error or null pointer exception. Null otherwise.
ctx.alert Object The current, active alert (if it exists). Includes ctx.alert.id, ctx.alert.version, and ctx.alert.isAcknowledged. Null if no alert is active. Only available with query-level monitors.
ctx.dedupedAlerts Object Alerts that have already been triggered. OpenSearch keeps the existing alert to prevent the plugin from creating an endless number of duplicate alerts. Only available with bucket-level monitors.
ctx.newAlerts Object Newly created alerts. Only available with bucket-level monitors.
ctx.completedAlerts Object Alerts that are no longer ongoing. Only available with bucket-level monitors.
bucket_keys String Comma-separated list of the monitor’s bucket key values. Available only for ctx.dedupedAlerts, ctx.newAlerts, and ctx.completedAlerts. Accessed through ctx.dedupedAlerts[0].bucket_keys.
parent_bucket_path String The parent bucket path of the bucket that triggered the alert. Accessed through ctx.dedupedAlerts[0].parent_bucket_path.

Add actions

The final step in creating a monitor is to add one or more actions. Actions send notifications when trigger conditions are met. Supported destinations are Slack, Amazon Chime, email, and custom webhooks.

If you don’t want to receive notifications for alerts, you don’t have to add actions to your triggers. Instead, you can periodically check OpenSearch Dashboards.

  1. Specify a name for the action.
  2. Choose a destination.
  3. Add a subject and body for the message.

    You can add variables to your messages using Mustache templates. You have access to ctx.action.name, the name of the current action, as well as all trigger variables.

    If your destination is a custom webhook that expects a particular data format, you might need to include JSON (or even XML) directly in the message body:

    { "text": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue. - Trigger: {{ctx.trigger.name}} - Severity: {{ctx.trigger.severity}} - Period start: {{ctx.periodStart}} - Period end: {{ctx.periodEnd}}" }
    

    In this case, the message content must conform to the Content-Type header in the custom webhook.

  4. If you’re using a bucket-level monitor, you can choose whether the monitor should perform an action for each execution or for each alert.

  5. (Optional) Use action throttling to limit the number of notifications you receive within a given span of time.

    For example, if a monitor checks a trigger condition every minute, you could receive one notification per minute. If you set action throttling to 60 minutes, you receive no more than one notification per hour, even if the trigger condition is met dozens of times in that hour.

  6. Choose Create.

After an action sends a message, the content of that message has left the purview of the security plugin. Securing access to the message (e.g. access to the Slack channel) is your responsibility.
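Within a monitor definition, an action with throttling enabled might look like the following sketch. The destination ID and message contents are placeholders:

```json
{
  "name": "notify-slack",
  "destination_id": "nO-yFmkB8NzS6aXjJdiI",
  "subject_template": { "source": "Alert: {{ctx.monitor.name}}", "lang": "mustache" },
  "message_template": { "source": "Trigger {{ctx.trigger.name}} fired.", "lang": "mustache" },
  "throttle_enabled": true,
  "throttle": { "value": 60, "unit": "MINUTES" }
}
```

With this configuration, the action fires at most once every 60 minutes, even if the trigger condition is met more often.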

Sample message

Monitor {{ctx.monitor.name}} just entered an alert state. Please investigate the issue.
- Trigger: {{ctx.trigger.name}}
- Severity: {{ctx.trigger.severity}}
- Period start: {{ctx.periodStart}}
- Period end: {{ctx.periodEnd}}

If you want to use the ctx.results variable in a message, use {{ctx.results.0}} rather than {{ctx.results[0]}}. This difference is due to how Mustache handles bracket notation.
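For example, a message body that reports the hit count from the query results uses dot notation like this:

```
Monitor {{ctx.monitor.name}} returned {{ctx.results.0.hits.total.value}} hits.
```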


Work with alerts

Alerts persist until you resolve the root cause. Alerts can be in the following states:

State Description
Active The alert is ongoing and unacknowledged. Alerts remain in this state until you acknowledge them, delete the trigger associated with the alert, or delete the monitor entirely.
Acknowledged Someone has acknowledged the alert, but not fixed the root cause.
Completed The alert is no longer ongoing. Alerts enter this state after the corresponding trigger evaluates to false.
Error An error occurred while executing the trigger, usually the result of a bad trigger or destination.
Deleted Someone deleted the monitor or trigger associated with this alert while the alert was ongoing.

Create cluster metrics monitor

In addition to monitoring conditions for indexes, the alerting plugin can monitor conditions on clusters. You can set alerts on cluster metrics to watch for when:

  • The health of your cluster reaches a status of yellow or red.
  • Cluster-level metrics, such as CPU usage and JVM memory usage, reach specified thresholds.
  • Node-level metrics, such as available disk space, JVM memory usage, and CPU usage, reach a specified threshold.
  • The total number of documents stored reaches a specified amount.

To create a cluster metrics monitor:

  1. Select Alerting > Monitors > Create monitor.
  2. Select the Per cluster metrics monitor option.
  3. In the Query section, pick the Request type from the dropdown.
  4. (Optional) If you want to filter the API response to use only certain path parameters, enter those parameters under Query parameters. Most APIs that can be used to monitor cluster status support path parameters as described in their documentation (e.g., comma-separated lists of index names).
  5. In the Triggers section, indicate what conditions trigger an alert. The trigger condition autopopulates a Painless ctx variable. For example, a cluster monitor watching Cluster Stats uses the trigger condition ctx.results[0].indices.count <= 0, which triggers an alert based on the number of indexes returned by the query. For more specificity, add any additional Painless conditions supported by the API. To see an example of the condition response, select Preview condition response.
  6. In the Actions section, indicate how you want your users to be notified when a trigger condition is met.
  7. Select Create. Your new monitor appears in the Monitors list.

Supported APIs

Trigger conditions use responses from the following API endpoints. Most APIs that can be used to monitor cluster status support path parameters as described in their documentation (e.g., comma-separated lists of index names). However, they do not support query parameters.

  1. _cluster/health
  2. _cluster/stats
  3. _cluster/settings
  4. _nodes/stats
  5. _cat/pending_tasks
  6. _cat/recovery
  7. _cat/snapshots
  8. _cat/tasks

Restrict API fields

If you want to hide fields from the API response that you do not want exposed for alerting, reconfigure the supported_json_payloads.json inside your alerting plugin. The file functions as an allow list for the API fields you want to use in an alert. By default, all APIs and their parameters can be used for monitors and trigger conditions.

However, you can modify the file so that cluster metrics monitors can be created only for the APIs it references. Furthermore, only fields referenced in the supported files can create trigger conditions. This example supported_json_payloads.json allows a cluster metrics monitor to be created for the _cluster/stats API, and trigger conditions for the indices.shards.total and indices.shards.index.shards.min fields.

"/_cluster/stats": {
  "indices": [
    "shards.total",
    "shards.index.shards.min"
  ]
}

Painless triggers

Painless scripts define triggers for cluster metrics monitors, similar to query-level or bucket-level monitors that are defined using the extraction query definition option. Painless scripts consist of at least one statement and any additional functions you want to execute.

The cluster metrics monitor supports up to ten triggers.

In this example, a JSON object creates a trigger that sends an alert when the cluster health is yellow. The script parameter points source to the Painless script ctx.results[0].status == "yellow".

{
  "name": "Cluster Health Monitor",
  "type": "monitor",
  "monitor_type": "query_level_monitor",
  "enabled": true,
  "schedule": {
    "period": {
      "unit": "MINUTES",
      "interval": 1
    }
  },
  "inputs": [
    {
      "uri": {
        "api_type": "CLUSTER_HEALTH",
        "path": "_cluster/health/",
        "path_params": "",
        "url": "http://localhost:9200/_cluster/health/"
      }
    }
  ],
  "triggers": [
    {
      "query_level_trigger": {
        "id": "Tf_L_nwBti6R6Bm-18qC",
        "name": "Yellow status trigger",
        "severity": "1",
        "condition": {
          "script": {
            "source": "ctx.results[0].status == \"yellow\"",
            "lang": "painless"
          }
        },
        "actions": []
      }
    }
  ]
}

See Trigger variables for more Painless ctx options.

Limitations

Currently, the cluster metrics monitor has the following limitations:

  • You cannot create monitors for remote clusters.
  • The OpenSearch cluster must be in a state where an index’s conditions can be monitored and actions can be executed against the index.
  • Removing resource permissions from a user will not prevent that user’s preexisting monitors for that resource from executing.
  • Users with permissions to create monitors are not blocked from creating monitors for resources for which they do not have permissions; however, those monitors will not execute.