LuceneStandardTokenizer Class

Definition

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

C#
public class LuceneStandardTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer

F#
type LuceneStandardTokenizer = class
    inherit LexicalTokenizer

VB
Public Class LuceneStandardTokenizer
Inherits LexicalTokenizer
Inheritance
Object → LexicalTokenizer → LuceneStandardTokenizer

Constructors

LuceneStandardTokenizer(String)

Initializes a new instance of LuceneStandardTokenizer.

Properties

MaxTokenLength

The maximum token length. The default is 255. Tokens longer than the maximum length are split. The maximum allowed value is 300 characters.
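As a hedged sketch of how the constructor and MaxTokenLength fit together: the tokenizer is created with a name (subject to the naming rules described under Name below) and can then be added to an index's tokenizer collection. The tokenizer name "my-std-tokenizer" and the index name "my-index" are illustrative, not part of this reference.

```csharp
using Azure.Search.Documents.Indexes.Models;

// Create the tokenizer; the name must follow the rules described
// under the Name property (letters, digits, spaces, dashes, underscores;
// alphanumeric start/end; at most 128 characters).
var tokenizer = new LuceneStandardTokenizer("my-std-tokenizer")
{
    // Tokens longer than this are split; 300 is the maximum allowed value.
    MaxTokenLength = 280
};

// Attach the tokenizer to an index definition (illustrative index name).
var index = new SearchIndex("my-index");
index.Tokenizers.Add(tokenizer);
```

A custom analyzer in the same index definition can then reference the tokenizer by its name.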

Name

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

(Inherited from LexicalTokenizer)
