
Arabic analyzer

The built-in arabic analyzer can be applied to a text field using the following command:

PUT /arabic-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "arabic"
      }
    }
  }
}
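
If you want to check how the built-in analyzer processes text before creating any mappings, you can call the _analyze API directly. The following request is a minimal illustrative example that reuses the sample sentence analyzed later on this page:

GET /_analyze
{
  "analyzer": "arabic",
  "text": "الطلاب يدرسون في الجامعات العربية."
}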

Stem exclusion

You can use stem_exclusion with this language analyzer by specifying a list of words to exclude from stemming, as shown in the following command:

PUT index_with_stem_exclusion_arabic
{
  "settings": {
    "analysis": {
      "analyzer": {
        "stem_exclusion_arabic_analyzer":{
          "type":"arabic",
          "stem_exclusion":["تكنولوجيا","سلطة "]
        }
      }
    }
  }
}
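
To confirm that the excluded terms are left unstemmed, you can run the _analyze API against the new analyzer. The following request is an illustrative sketch that reuses the two excluded words; compare its output with the output produced when stem_exclusion is not configured:

POST /index_with_stem_exclusion_arabic/_analyze
{
  "analyzer": "stem_exclusion_arabic_analyzer",
  "text": "تكنولوجيا سلطة"
}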

Arabic analyzer internals

The arabic analyzer is built using the following components:

  • Tokenizer: standard

  • Token filters:

    • lowercase
    • decimal_digit
    • stop (Arabic)
    • normalization (Arabic)
    • keyword
    • stemmer (Arabic)

Custom Arabic analyzer

You can create a custom Arabic analyzer using the following command:

PUT /arabic-index
{
  "settings": {
    "analysis": {
      "filter": {
        "arabic_stop": {
          "type": "stop",
          "stopwords": "_arabic_"
        },
        "arabic_stemmer": {
          "type": "stemmer",
          "language": "arabic"
        },
        "arabic_normalization": {
          "type": "arabic_normalization"
        },
        "decimal_digit": {
          "type": "decimal_digit"
        },
        "arabic_keywords": {
          "type":       "keyword_marker",
          "keywords":   [] 
        }
      },
      "analyzer": {
        "arabic_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "arabic_normalization",
            "decimal_digit",
            "arabic_stop",
            "arabic_keywords",
            "arabic_stemmer"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "arabic_analyzer"
      }
    }
  }
}
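
Because the same analyzer is applied at both index time and search time, a query term matches any indexed term that analyzes to the same token. The following illustrative example (the document ID and sample sentence are arbitrary) indexes a document containing الطلاب and then searches for طلاب; both forms are reduced to the token طلاب, as shown in the token output in the next section, so the document matches:

PUT /arabic-index/_doc/1
{
  "content": "الطلاب يدرسون في الجامعات العربية."
}

GET /arabic-index/_search
{
  "query": {
    "match": {
      "content": "طلاب"
    }
  }
}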

Generated tokens

Use the following request to examine the tokens generated using the analyzer:

POST /arabic-index/_analyze
{
  "field": "content",
  "text": "الطلاب يدرسون في الجامعات العربية. أرقامهم ١٢٣٤٥٦."
}

The response contains the generated tokens. Note that the stopword في is removed (hence the gap at position 2), the Arabic-Indic digits are converted to the digits 0–9 by the decimal_digit token filter, and the remaining terms are normalized and stemmed:

{
  "tokens": [
    {
      "token": "طلاب",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "يدرس",
      "start_offset": 7,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "جامع",
      "start_offset": 17,
      "end_offset": 25,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "عرب",
      "start_offset": 26,
      "end_offset": 33,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "ارقامهم",
      "start_offset": 35,
      "end_offset": 42,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "123456",
      "start_offset": 43,
      "end_offset": 49,
      "type": "<NUM>",
      "position": 6
    }
  ]
}