Token Filters
Analyzers reference Token Filters by name. Use an existing one, or create a variant using IndexMapping.AddCustomTokenFilter:
var m *IndexMapping = index.Mapping()
err := m.AddCustomTokenFilter("color_stop_filter", map[string]interface{}{
    "type": stop_tokens_filter.Name,
    "tokens": []interface{}{
        "red",
        "green",
        "blue",
    },
})
if err != nil {
    log.Fatal(err)
}
This creates a new Stop Token Filter named “color_stop_filter”, which removes all “red”, “green” or “blue” tokens. Once registered, this filter can be referenced by a custom Analyzer.
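For example, the registered filter could be wired into a custom analyzer. This is only a sketch: the analyzer type name “custom”, the tokenizer name “unicode”, and the filter name “to_lower” are assumed registered names and are not defined on this page.
// Sketch: register a custom analyzer that runs the unicode tokenizer,
// lowercases tokens, and then applies the "color_stop_filter" defined above.
// The "custom", "unicode" and "to_lower" names are assumptions about the
// default registry and may differ in your version.
err := m.AddCustomAnalyzer("color_analyzer", map[string]interface{}{
    "type":      "custom",
    "tokenizer": "unicode",
    "token_filters": []interface{}{
        "to_lower",
        "color_stop_filter",
    },
})
if err != nil {
    log.Fatal(err)
}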
Apostrophe
Configuration:
- type: apostrophe_filter.Name
The Apostrophe Token Filter removes all characters after an apostrophe.
Camel Case
The Camel Case Filter splits a token written in camel case into the set of tokens comprising it. For example, the token camelCase would produce camel and Case.
CLD2
The CLD2 Token Filter will take the text from each token and pass it to the Compact Language Detection 2 library. Each token is replaced with a new token corresponding to the ISO 639 language code detected. Input text should already be converted to lower case.
Compound Word Dictionary
The compound word dictionary filter lets you supply a dictionary of words that combine to form compound words, and lets you index them individually.
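A hypothetical registration sketch: the type name “dict_compound”, the “dict_token_map” key, and the token map name “compound_dict_map” are all assumptions; consult the filter’s source for the real configuration keys.
// Hypothetical sketch: assumes a token map named "compound_dict_map" has
// already been registered, that the filter's type name is "dict_compound",
// and that it reads the dictionary from a "dict_token_map" key.
err := m.AddCustomTokenFilter("compound_splitter", map[string]interface{}{
    "type":           "dict_compound",
    "dict_token_map": "compound_dict_map",
})
if err != nil {
    log.Fatal(err)
}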
Edge n-gram
The edge n-gram token filter will compute n-grams just like the n-gram token filter, but all the computed n-grams are rooted at one side (either the front or the back).
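A registration sketch; the type name “edge_ngram”, the “min”/“max” keys, and the “edge” key selecting the side are assumptions about this filter’s configuration, not taken from this page.
// Sketch: assumes the filter registers as "edge_ngram" and reads "min"/"max"
// n-gram lengths plus an "edge" side selector. Numeric config values are
// assumed to be read as float64, hence 2.0/4.0.
err := m.AddCustomTokenFilter("front_edge_ngram", map[string]interface{}{
    "type": "edge_ngram",
    "edge": "front", // or "back"; key and values are assumed
    "min":  2.0,
    "max":  4.0,
})
if err != nil {
    log.Fatal(err)
}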
Elision
The elision filter identifies and removes articles prefixing a term and separated by an apostrophe.
For example, in French l'avion becomes avion.
The elision filter is configured with a reference to a token map containing the articles.
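A sketch of the two-step setup: register a token map of articles, then an elision filter that references it. The token map type name “custom”, the filter type name “elision”, and the “articles_token_map” key are assumptions.
// Sketch: register a token map of French articles, then an elision filter
// that references it. The "custom" token map type, the "elision" filter
// type and the "articles_token_map" key are assumed names.
err := m.AddCustomTokenMap("fr_articles_map", map[string]interface{}{
    "type":   "custom",
    "tokens": []interface{}{"l", "d", "j", "qu"},
})
if err != nil {
    log.Fatal(err)
}
err = m.AddCustomTokenFilter("fr_elision", map[string]interface{}{
    "type":               "elision",
    "articles_token_map": "fr_articles_map",
})
if err != nil {
    log.Fatal(err)
}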
Keyword Marker
The keyword marker filter will identify keywords and mark them as such. Keywords are then ignored by any downstream stemmer.
The keyword marker filter is configured with a token map containing the keywords.
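A sketch along the same lines; the type name “keyword_marker” and the “keywords_token_map” key are assumptions, and “protected_terms_map” is a hypothetical token map registered beforehand.
// Sketch: assumes a token map named "protected_terms_map" is already
// registered, that the filter's type name is "keyword_marker", and that
// it reads the keyword list from a "keywords_token_map" key.
err := m.AddCustomTokenFilter("protect_keywords", map[string]interface{}{
    "type":               "keyword_marker",
    "keywords_token_map": "protected_terms_map",
})
if err != nil {
    log.Fatal(err)
}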
Length
The length filter identifies tokens which are either too long or too short. There are two parameters, the minimum token length and the maximum token length. Tokens that are either too long or too short are removed from the token stream.
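A sketch assuming the filter registers as “length” and reads “min”/“max” keys; both the type name and the keys are assumptions.
// Sketch: assumes the type name "length" and the "min"/"max" keys, with
// numeric values read as float64. Tokens shorter than 3 or longer than 20
// characters would be dropped.
err := m.AddCustomTokenFilter("not_too_long", map[string]interface{}{
    "type": "length",
    "min":  3.0,
    "max":  20.0,
})
if err != nil {
    log.Fatal(err)
}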
Lowercase
The Lowercase Token Filter will examine each input token and map all Unicode letters to their lower case.
n-gram
The n-gram token filter computes n-grams from each input token. There are two parameters, the minimum and maximum n-gram length.
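A sketch assuming the type name “ngram” and the “min”/“max” keys; both are assumptions, not taken from this page.
// Sketch: assumes the filter registers as "ngram" and reads "min"/"max"
// n-gram lengths as float64 values.
err := m.AddCustomTokenFilter("bigram_trigram", map[string]interface{}{
    "type": "ngram",
    "min":  2.0,
    "max":  3.0,
})
if err != nil {
    log.Fatal(err)
}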
Porter Stemmer
The porter stemmer filter applies the Porter Stemming Algorithm to the input tokens.
Shingle
The Shingle filter computes multi-token shingles from the input token stream. For example, the token stream the quick brown fox, when configured with a shingle minimum and maximum length of 2, would produce the tokens the quick, quick brown and brown fox.
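A sketch assuming the type name “shingle” and “min”/“max” shingle-length keys; both are assumptions.
// Sketch: assumes the filter registers as "shingle" and reads "min"/"max"
// shingle lengths; with min = max = 2 it would emit the word pairs shown
// in the example above.
err := m.AddCustomTokenFilter("word_pairs", map[string]interface{}{
    "type": "shingle",
    "min":  2.0,
    "max":  2.0,
})
if err != nil {
    log.Fatal(err)
}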
Stemmer
The stemmer token filter takes input terms and applies a stemming process to them.
This implementation uses libstemmer.
The supported languages are (a configuration sketch follows the list):
- Danish
- Dutch
- English
- Finnish
- French
- German
- Hungarian
- Italian
- Norwegian
- Porter
- Portuguese
- Romanian
- Russian
- Spanish
- Swedish
- Turkish
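A sketch of registering a language-specific stemmer; the type name “stem”, the “lang” key, and the exact language value format are assumptions about the libstemmer-backed filter’s configuration.
// Sketch: assumes the libstemmer-backed filter registers as "stem" and
// selects its language via a "lang" key; the value format is assumed.
err := m.AddCustomTokenFilter("stemmer_es", map[string]interface{}{
    "type": "stem",
    "lang": "spanish",
})
if err != nil {
    log.Fatal(err)
}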
Stop Token
Configuration:
- type: stop_tokens_filter.Name
- stop_token_map (string): the name of the token map identifying tokens to remove.
The Stop Token Filter is configured with a map of tokens that should be removed from the token stream.
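Given this configuration, a stop filter can reference a previously registered token map instead of an inline token list. The “stop_token_map” key is the one documented above; the token map type name “custom” is an assumption.
// Sketch: register a token map of stop words, then a stop token filter
// that references it via the documented "stop_token_map" key. The token
// map's "custom" type name is an assumption.
err := m.AddCustomTokenMap("color_token_map", map[string]interface{}{
    "type":   "custom",
    "tokens": []interface{}{"red", "green", "blue"},
})
if err != nil {
    log.Fatal(err)
}
err = m.AddCustomTokenFilter("color_stop_filter_v2", map[string]interface{}{
    "type":           stop_tokens_filter.Name,
    "stop_token_map": "color_token_map",
})
if err != nil {
    log.Fatal(err)
}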
Truncate Token
The truncate token filter truncates each input token to a maximum token length.
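A sketch assuming the type name “truncate_token” and a “length” key for the maximum token length; both are assumptions.
// Sketch: assumes the filter registers as "truncate_token" and reads the
// maximum token length from a "length" key (as a float64).
err := m.AddCustomTokenFilter("truncate_10", map[string]interface{}{
    "type":   "truncate_token",
    "length": 10.0,
})
if err != nil {
    log.Fatal(err)
}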
Unicode Normalize
The Unicode normalization filter converts the input terms into the specified Unicode Normalization Form.
The supported forms are:
- nfc
- nfd
- nfkc
- nfkd
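A sketch assuming the type name “normalize_unicode” and a “form” key taking one of the values listed above; both the type name and the key are assumptions.
// Sketch: assumes the filter registers as "normalize_unicode" and selects
// the normalization form via a "form" key.
err := m.AddCustomTokenFilter("nfkc_normalizer", map[string]interface{}{
    "type": "normalize_unicode",
    "form": "nfkc",
})
if err != nil {
    log.Fatal(err)
}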