You're viewing version 2.18 of the OpenSearch documentation. This version is no longer maintained. For the latest version, see the current documentation. For information about OpenSearch version maintenance, see Release Schedule and Maintenance Policy.
Letter tokenizer
The letter tokenizer splits text into words on any non-letter character. It works well for many European languages but is ineffective for some Asian languages, in which words aren't separated by spaces.
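For example, because an apostrophe is a non-letter character, the letter tokenizer splits a word like don't into two tokens. You can observe this with a standalone _analyze request (the sample text here is illustrative):

POST _analyze
{
  "tokenizer": "letter",
  "text": "don't stop"
}

This request returns the tokens don, t, and stop.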
Example usage
The following example request creates a new index named my_index and configures an analyzer with a letter tokenizer:
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_letter_analyzer": {
          "type": "custom",
          "tokenizer": "letter"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "my_letter_analyzer"
      }
    }
  }
}
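To confirm that the analyzer was registered, you can retrieve the index settings (an optional check):

GET /my_index/_settings

The response includes the analysis section containing my_letter_analyzer.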
Generated tokens
Use the following request to examine the tokens generated using the analyzer:
POST /my_index/_analyze
{
  "analyzer": "my_letter_analyzer",
  "text": "Cats 4EVER love chasing butterflies!"
}
The response contains the generated tokens:
{
  "tokens": [
    {
      "token": "Cats",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "EVER",
      "start_offset": 6,
      "end_offset": 10,
      "type": "word",
      "position": 1
    },
    {
      "token": "love",
      "start_offset": 11,
      "end_offset": 15,
      "type": "word",
      "position": 2
    },
    {
      "token": "chasing",
      "start_offset": 16,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "butterflies",
      "start_offset": 24,
      "end_offset": 35,
      "type": "word",
      "position": 4
    }
  ]
}
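Note that the letter tokenizer does not change token case, so Cats is emitted as is. If you need lowercase tokens, one option is to combine the letter tokenizer with the built-in lowercase token filter in the custom analyzer (a sketch; the index and analyzer names are illustrative):

PUT /my_lowercase_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_letter_lowercase_analyzer": {
          "type": "custom",
          "tokenizer": "letter",
          "filter": ["lowercase"]
        }
      }
    }
  }
}

With this analyzer, the same sample text produces the tokens cats, ever, love, chasing, and butterflies.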