WSTA Review (1): Preprocessing

1. Words and Corpora

  • corpus (plural: corpora)
    A computer-readable collection of text or speech.
  • lemma
    A set of word forms having the same stem, the same major part-of-speech, and the same word sense.
  • Word Type
    Types are the distinct words in a corpus; the number of types is the size of the vocabulary |V|.
  • Word Token
    Tokens are the total number N of running words.

e.g. They picnicked by the pool, then lay back on the grass and looked at the stars.

The sentence above has 16 tokens (ignoring punctuation) and 14 types, since the occurs three times.
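A minimal sketch of that count, using a naive punctuation strip and whitespace split (real tokenizers handle punctuation more carefully):

sentence = ("They picnicked by the pool, then lay back on the grass "
            "and looked at the stars.")
tokens = sentence.replace(',', '').replace('.', '').split()
print(len(tokens))       # 16 tokens
print(len(set(tokens)))  # 14 types ('the' occurs three times)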

  • Herdan's Law or Heaps' Law
    $$|V| = kN^{\beta}$$
    • $|V|$: the number of types

    • $N$: the number of tokens

    • $k$ and $\beta$ are positive constants, with $0 < \beta < 1$.
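A worked sketch of the law; the constants below are hypothetical, chosen only to show how vocabulary grows sub-linearly with corpus size (real values of k and β are fit empirically per corpus):

k, beta = 30, 0.7  # hypothetical constants, for illustration only
for N in (10_000, 100_000, 1_000_000):
    print(N, round(k * N ** beta))  # estimated |V| grows sub-linearly in N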

2. Text Normalization

Possible procedures (a pipeline sketch combining them follows the list):

  • Remove unwanted formatting (e.g. HTML)
  • Segment structure (e.g. sentences)
  • Tokenize words
  • Normalize words
  • Remove unwanted words (e.g. stop words)
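A minimal sketch chaining these steps with NLTK; the naive regex HTML strip and the English stopword list are illustrative assumptions, and the code requires nltk.download('punkt') and nltk.download('stopwords'):

import re
import nltk
from nltk.corpus import stopwords

def preprocess(html):
    text = re.sub(r'<[^>]+>', ' ', html)  # remove unwanted formatting (naive HTML strip)
    sentences = nltk.sent_tokenize(text)  # segment structure into sentences
    tokens = [tok for sent in sentences
              for tok in nltk.word_tokenize(sent)]  # tokenize words
    tokens = [tok.lower() for tok in tokens]        # normalize: case folding
    stops = set(stopwords.words('english'))
    return [tok for tok in tokens
            if tok.isalpha() and tok not in stops]  # remove stop words (and punctuation)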

2.1 Segmenting/Tokenizing

Tokenization: The task of segmenting running text into words.

  • expanding clitics: the 'm in I'm expands to am (note that the sketch below only splits the clitic off; expanding it to am is a further normalization step)
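A minimal sketch with NLTK's Penn-Treebank-style tokenizer (assumes nltk.download('punkt') has been run):

import nltk

print(nltk.word_tokenize("I'm sure they aren't here."))
# ['I', "'m", 'sure', 'they', 'are', "n't", 'here', '.']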

2.2 Normalizing

  • Normalization: The task of putting words/tokens in a standard format
  • Case folding: everything mapped to lower case
  • Removing morphology
    • Lemmatization: The task of determining that two words have the same root, despite their surface differences (e.g. is, are, and am share the lemma be).
import nltk

# Requires nltk.download('wordnet'). Try the verb lemma first; if
# WordNet leaves the word unchanged, fall back to the noun lemma.
lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
def lemmatize(word):
    lemma = lemmatizer.lemmatize(word, 'v')
    if lemma == word:
        lemma = lemmatizer.lemmatize(word, 'n')
    return lemma
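Usage sketch (outputs assume the standard WordNet data):

print(lemmatize('running'))  # 'run'   (matched by the verb pass)
print(lemmatize('women'))    # 'woman' (no verb lemma, so the noun pass applies)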
  • Stemming:

    • stems: the central morpheme of a word, supplying its main meaning; often not an actual lexical item itself.
    • affixes: morphemes that add 'additional' meanings of various kinds.

    Stemming strips off affixes, leaving a stem, e.g. automate --> automat.

    • the Porter Stemmer (the most popular stemmer for English):
    import nltk
    stemmer = nltk.stem.porter.PorterStemmer()
    tokenized_sentence = ['automation', 'automates', 'automatic']  # hypothetical input
    print([stemmer.stem(token) for token in tokenized_sentence])
    
  • Correcting spelling

  • Expanding abbreviations

2.3 Segmenting

  • the MaxMatch algorithm
    used for word segmentation in Chinese, which is written without spaces between words.

The maximum matching algorithm starts by pointing at the beginning of a string. It chooses the longest word in the dictionary that matches the input at the current position, then advances the pointer to the end of that word in the string. If no word matches, the pointer is instead advanced one character. The algorithm is then applied iteratively, starting from the new pointer position.[1]

The code below is an example of the MaxMatch algorithm applied to English; however, the algorithm works much better for Chinese than for English (a usage sketch follows the definition).

def max_match(text_string, dictionary, word_list):
    '''
    @param text_string: a string of alphabetic characters, without spaces
    @param dictionary: a collection of known words
    @param word_list: accumulator for the matched words

    Greedily matches the longest dictionary word at the current
    position, falling back to a single character when nothing matches.
    Uses lemmatize() from section 2.2 so inflected forms can match.
    '''
    if not text_string:
        return word_list
    for i in range(len(text_string), 1, -1):
        first_word = text_string[:i]
        remainder = text_string[i:]
        if lemmatize(first_word) in dictionary:
            break
    else:
        # no dictionary word matched: advance one character
        first_word = text_string[0]
        remainder = text_string[1:]

    word_list.append(first_word)
    return max_match(remainder, dictionary, word_list)
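Usage sketch with a toy dictionary (hypothetical; assumes lemmatize() and the WordNet data from section 2.2 are available). The greedy longest match grabs canon instead of can, illustrating why MaxMatch is unreliable for English:

toy_dict = {'we', 'can', 'canon', 'only', 'see'}
print(max_match('wecanonlysee', toy_dict, []))
# ['we', 'canon', 'l', 'y', 'see']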

  1. Jurafsky & Martin, Speech and Language Processing (3rd ed. draft), Chapter 2, p. 15.
