LexicalTokenizerName Struct

Definition

Defines the names of all tokenizers supported by the search engine.

C#
public readonly struct LexicalTokenizerName : IEquatable<Azure.Search.Documents.Indexes.Models.LexicalTokenizerName>

F#
type LexicalTokenizerName = struct
    interface IEquatable<LexicalTokenizerName>

VB
Public Structure LexicalTokenizerName
    Implements IEquatable(Of LexicalTokenizerName)
Inheritance
LexicalTokenizerName

Implements
IEquatable<LexicalTokenizerName>
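
LexicalTokenizerName follows the extensible-enum pattern used across the Azure SDK: well-known names are exposed as static properties, and each value wraps the service-side string name. A minimal sketch:

using System;
using Azure.Search.Documents.Indexes.Models;

// Well-known names are static properties rather than enum members, so
// the set of values can grow without breaking compiled code.
LexicalTokenizerName tokenizer = LexicalTokenizerName.Classic;

// Each value wraps the service-side string name, so names the service
// adds later can still be represented (see the constructor below).
Console.WriteLine(tokenizer); // prints the underlying name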

Constructors

LexicalTokenizerName(String)

Initializes a new instance of LexicalTokenizerName.
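
The constructor wraps a raw tokenizer name, which is useful when the service supports a name that this SDK version does not yet expose as a static property. A short sketch (the name "some_newer_tokenizer" is purely illustrative):

using Azure.Search.Documents.Indexes.Models;

// Wrap a raw service-side tokenizer name directly; equivalent values
// compare equal to the corresponding static property, if one exists.
var name = new LexicalTokenizerName("some_newer_tokenizer");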

Properties

Classic

Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html.

EdgeNGram

Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html.
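
The EdgeNGram name selects the tokenizer with its service defaults. To choose the gram sizes yourself, configure a named EdgeNGramTokenizer instance in the index and reference it by that name instead; a minimal sketch (the instance name "edge_2_10" is illustrative):

using Azure.Search.Documents.Indexes.Models;

// Configure explicit gram sizes via the EdgeNGramTokenizer class.
var edge = new EdgeNGramTokenizer("edge_2_10")
{
    MinGram = 2,  // shortest gram emitted
    MaxGram = 10, // longest gram emitted
};
edge.TokenChars.Add(TokenCharacterKind.Letter); // only tokenize letters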

Keyword

Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html.

Letter

Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html.

Lowercase

Divides text at non-letters and converts the resulting tokens to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGram

Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html.

PathHierarchy

Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html.

Pattern

Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html.
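
Likewise, the Pattern name uses the tokenizer's default settings (the regex \W+, which splits on non-word characters). Supplying your own regex requires a configured PatternTokenizer instance; a sketch with an illustrative name and pattern:

using Azure.Search.Documents.Indexes.Models;

// Split on commas and semicolons instead of the default \W+;
// the instance name "comma_separator" is illustrative.
var pattern = new PatternTokenizer("comma_separator")
{
    Pattern = "[,;]",
};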

Standard

Standard Lucene tokenizer; breaks text following the Unicode Text Segmentation rules. The standard analyzer combines this tokenizer with the lowercase filter and stop filter. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html.

UaxUrlEmail

Tokenizes URLs and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html.

Whitespace

Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html.
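
These names are typically consumed when defining a custom analyzer: CustomAnalyzer takes a LexicalTokenizerName, which may be one of the built-in values above or the name of a configured tokenizer added to the index. A minimal sketch (the index, analyzer, and tokenizer names are illustrative):

using Azure.Search.Documents.Indexes.Models;

var index = new SearchIndex("hotels"); // illustrative index name

// A custom analyzer built on a built-in tokenizer name:
var analyzer = new CustomAnalyzer("whitespace_lower", LexicalTokenizerName.Whitespace);
analyzer.TokenFilters.Add(TokenFilterName.Lowercase);
index.Analyzers.Add(analyzer);

// Or reference a configured tokenizer by its instance name; the implicit
// string conversion produces the LexicalTokenizerName:
var edge = new EdgeNGramTokenizer("edge_2_10") { MinGram = 2, MaxGram = 10 };
index.Tokenizers.Add(edge);
index.Analyzers.Add(new CustomAnalyzer("autocomplete", edge.Name));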

Methods

Equals(LexicalTokenizerName)

Indicates whether the current object is equal to another object of the same type.

ToString()

Returns the underlying string value of this tokenizer name.
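
Because the struct wraps a string, both members operate on the underlying name; a short sketch:

using System;
using Azure.Search.Documents.Indexes.Models;

var name = LexicalTokenizerName.Whitespace;
Console.WriteLine(name.ToString());                                     // "whitespace"
Console.WriteLine(name.Equals(new LexicalTokenizerName("whitespace"))); // True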

Operators

Equality(LexicalTokenizerName, LexicalTokenizerName)

Determines if two LexicalTokenizerName values are the same.

Implicit(String to LexicalTokenizerName)

Converts a String to a LexicalTokenizerName.

Inequality(LexicalTokenizerName, LexicalTokenizerName)

Determines if two LexicalTokenizerName values are not the same.
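
Together, the operators make the struct interchangeable with its string form; a short sketch:

using System;
using Azure.Search.Documents.Indexes.Models;

LexicalTokenizerName a = LexicalTokenizerName.Letter;
LexicalTokenizerName b = "letter"; // Implicit(String to LexicalTokenizerName)

Console.WriteLine(a == b);                               // True (Equality)
Console.WriteLine(a != LexicalTokenizerName.Whitespace); // True (Inequality)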
