
Normalization token filter

The normalization token filter adjusts and simplifies text to reduce variations, particularly in special characters. It is primarily used to handle variations in writing by standardizing language-specific characters.

The following normalization token filters are available:

- arabic_normalization
- german_normalization
- hindi_normalization
- indic_normalization
- sorani_normalization
- persian_normalization
- scandinavian_normalization
- scandinavian_folding
- serbian_normalization
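Any of these filters can be tried without creating an index by passing it inline to the _analyze API. The following request is a minimal sketch of that pattern, using german_normalization with arbitrary sample text:

POST /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "german_normalization"],
  "text": "Straße"
}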

Example

The following example request creates a new index named german_normalizer_example and configures an analyzer with a german_normalization filter:

PUT /german_normalizer_example
{
  "settings": {
    "analysis": {
      "filter": {
        "german_normalizer": {
          "type": "german_normalization"
        }
      },
      "analyzer": {
        "german_normalizer_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase", 
            "german_normalizer"
          ]
        }
      }
    }
  }
}
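Note that the lowercase filter is listed before german_normalizer: token filters run in the order in which they appear in the filter array, so tokens are lowercased first and then normalized.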

Generated tokens

Use the following request to examine the tokens generated using the analyzer:

POST /german_normalizer_example/_analyze
{
  "text": "Straße München",
  "analyzer": "german_normalizer_analyzer"
}

The response contains the generated tokens:

{
  "tokens": [
    {
      "token": "strasse",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "munchen",
      "start_offset": 7,
      "end_offset": 14,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
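To use the analyzer at index and search time, reference it in a field mapping. The following request is a minimal sketch, assuming a hypothetical city field on the index created above:

PUT /german_normalizer_example/_mapping
{
  "properties": {
    "city": {
      "type": "text",
      "analyzer": "german_normalizer_analyzer"
    }
  }
}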