
Statistical Natural Language Processing in Python




or ---

How To Do Things With Words. And Counters.

or ---

Everything I Needed to Know About NLP I learned From Sesame Street. Except Kneser-Ney Smoothing. The Count Didn't Cover That.

(1) Data: Word Counts

First we want to know what natural language looks like. I'll concatenate a big bunch of text together as big.txt.

from __future__ import division   # so that count / N below is true division (Python 2)
from collections import Counter
import re

BIG = open('big.txt').read()      # the concatenated corpus described above

def tokens(text):
    """List all the word tokens (consecutive letters) in a text. 
    Normalize to lowercase."""
    return re.findall('[a-z]+', text.lower()) 

assert tokens('This is a test.') == ['this', 'is', 'a', 'test']

WORDS = Counter(tokens(BIG))
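A quick sanity check (the exact numbers depend on what went into big.txt; the calls below are just an illustration):

print len(WORDS), 'distinct words;', sum(WORDS.values()), 'tokens'
print WORDS.most_common(10)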

(2) Task: Spelling Correction

Given a word w, find the most likely correction c = correct(w).

Approach: Try all candidates c that are near w. Choose the most likely one.

How to balance near and likely?

For now, in a trivial way: always prefer nearer, but when there is a tie on nearness, use the word with the highest WORDS count. Measure nearness by edit distance: the minimum number of deletions, transpositions, insertions, or replacements needed to turn one word into another. We will define a function edits1(word) to return the set of candidate words that are one edit away. For example, given "wird", this would include "weird" (inserting an e), "word" (replacing the i with an o), and "iwrd" (transposing the w and the i). How do we generate them? One way is to split the original word at every possible position, forming a pair of strings before and after that position (either of which may be empty), and at each position either delete, transpose, replace, or insert a letter:

pairs:          Ø+wird    w+ird    wi+rd    wir+d    wird+Ø
deletions:      Ø+ird     w+rd     wi+d     wir+Ø
transpositions: Ø+iwrd    w+rid    wi+dr
replacements:   Ø+?+ird   w+?+rd   wi+?+d   wir+?
insertions:     Ø+?+wird  w+?+ird  wi+?+rd  wir+?+d  wird+?+Ø

In []:

def edits1(word):
    "Return all strings that are one edit away from this word."
    pairs      = splits(word)
    deletes    = [a+b[1:]           for (a, b) in pairs if b]
    transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1]
    replaces   = [a+c+b[1:]         for (a, b) in pairs for c in alphabet if b]
    inserts    = [a+c+b             for (a, b) in pairs for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def splits(word):
    "Return a list of all possible (first, rest) pairs that comprise word."
    return [(word[:i], word[i:]) 
            for i in range(len(word)+1)]

alphabet = 'abcdefghijklmnopqrstuvwxyz'
splits('wird')

Now we're ready to write correct(word): consider candidates up to some edit distance away (I think edit distance 2 will do), prefer the smallest edit distance, and break ties by choosing the most common candidate. We will define known(words) to return the subset of words that are in the dictionary.

In []:

def correct(word):
    "Find the best spelling correction for this word."
    # Prefer edit distance 0, then 1, then 2; otherwise default to word itself.
    candidates = (known(edits0(word)) or 
                  known(edits1(word)) or 
                  known(edits2(word)) or [word])
    return max(candidates, key=WORDS.get)

In []:

def known(words):
    "Return the subset of words that are actually in the dictionary."
    return filter(WORDS.get, words)

def edits0(word): return {word}

def edits2(word):
    "Return all strings that are two edits away from this word."
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1))

def correct_text(text):
    "Correct all the words within a text, returning the corrected text."
    return re.sub('[a-zA-Z]+', correct_match, text)

def correct_match(match):
    "Spell-correct word in match, and preserve proper upper/lower/title case."
    word = match.group()
    return case_of(word)(correct(word.lower()))

def case_of(text):
    "Return the case-function appropriate for text: upper, lower, title, or just str."
    return (str.upper if text.isupper() else
            str.lower if text.islower() else
            str.title if text.istitle() else
            str)
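A usage sketch (the sample sentence is mine, and the exact corrections depend on the WORDS counts built from big.txt):

print correct_text('Speling errurs in somethink. Whutever; unusuel misteakes?')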

(3) Theory: From Counts to Probabilities of Word Sequences

What is the probability of a sequence of words?

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_1 w_2) \times \cdots \times P(w_n \mid w_1 \ldots w_{n-1})$

A wrong approximation:

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2) \times P(w_3) \times \cdots \times P(w_n)$

This is called the bag of words model: it is equivalent to taking all the words from a text, cutting them into slips of paper with one word each, putting them in a paper bag, shaking it up, and drawing words out one at a time.

All models are wrong, but some are useful. -- George Box

A less-wrong approximation:

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \times \cdots \times P(w_n \mid w_{n-1})$

This is called the bigram model, and is equivalent to doing the same, but making slips of paper with two words on them, and having multiple bags, and putting each slip into a bag labelled with the first word on the slip.

Let's start by defining the probability of a string of words based on a bag of words model:

In []:

def Pwords(words):
    "Probability of words, assuming each word is independent of others."
    return product(Pword(w) for w in words)

def product(nums):
    "Multiply the numbers together.  (Like `sum`, but with multiplication.)"
    result = 1
    for x in nums:
        result *= x
    return result

def pdist(counter):
    "Make a probability distribution, given evidence from a Counter."
    N = sum(counter.values())
    return lambda x: counter[x]/N

Pword = pdist(WORDS)
tests = ['this is a test', 'this is a unusual test',
         'this is a neverbeforeseen test']

for test in tests:
    print Pwords(tokens(test)), test

(4) Task: Word Segmentation

Task: given a sequence of characters with no spaces separating words, recover the sequence of words.

Why? Some languages are written with no delimiters between words, for example Chinese: 不带空格的话 (words written without spaces).

In English, there are sub-genres with no word delimiters: run-together spelling errors and URLs.

Approach 1: Enumerate all candidate segmentations and choose the one with the highest Pwords.

Problem: how many segmentations are there for an n-character text? Each of the $n-1$ gaps between characters either is or is not a word boundary, so there are $2^{n-1}$ segmentations; exhaustive enumeration is hopeless for long texts.

Approach 2: Make one segmentation, into a first word and remaining characters. If we assume words are independent (a wrong assumption) then we can maximize the probability of the first word adjoined to the best segmentation of the remaining characters.

assert segment('choosespain') == ['choose', 'spain']

segment('choosespain') ==
   max([['c'] + segment('hoosespain'),
        ['ch'] + segment('oosespain'),
        ['cho'] + segment('osespain'),
        ['choo'] + segment('sespain'),
        ...
        ['choosespain'] + segment('')],
       key=Pwords)

To make this somewhat efficient, we need to avoid re-computing the segmentations of the remaining characters. This can be done explicitly by dynamic programming or implicitly with memoization. Also, we shouldn't consider all possible lengths for the first word; we can impose a maximum length. What should it be? A little more than the longest word seen so far.

In []:

def memo(f):
    "Memoize function f, whose args must all be hashable."
    cache = {}
    def fmemo(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    fmemo.cache = cache
    return fmemo

max(map(len, WORDS))  # length of the longest word in the corpus

@memo
def segment(text):
    "Return a list of words that is the most probable segmentation of text."
    if not text: 
        return []
    else:
        candidates = ([first] + segment(rest) 
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=Pwords)

def splits(text, start=0, L=20):
    "Return a list of all (first, rest) pairs; with start<=len(first)<=L."
    return [(text[:i], text[i:]) 
            for i in range(start, min(len(text), L)+1)]

decl = tokens("""When in the Course of human events, it becomes necessary 
              for one people to dissolve the political bands which have 
              connected them with another, and to assume among 
              the powers of the earth, the separate and equal station 
              to which the Laws of Nature and of Nature's God entitle them""")

cat = ''.join

print cat(decl)

print segment(cat(decl))
  • Looks pretty good!
  • The bag-of-words assumption is a limitation.
  • Recomputing Pwords on each recursive call is somewhat inefficient.
  • At 50 words, the probability is already about 1.5e-141; the smallest positive float is around 1e-323, so we would underflow somewhere past 100 words. Beyond that, we'll need to use logarithms, or other tricks, to avoid numeric underflow (a minimal log-space version is sketched below).
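As a minimal sketch of the logarithm trick (logPwords is a name introduced here, not something defined elsewhere in this post):

from math import log10

def logPwords(words):
    "Log10 of the bag-of-words probability: a sum of logs instead of a product."
    return sum(log10(Pword(w)) if Pword(w) > 0 else float('-inf')
               for w in words)

Ranking candidates with key=logPwords gives the same ordering as key=Pwords, because log is monotonic, but without the underflow.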

(5) Data: From Millions to Billions of Words

Let's move up from millions to billions of words. Once we have that much data, we can start to look at two-word sequences without the counts being too sparse. I happen to have data files available in the format "word \t count", and bigram data in the form "word1 word2 \t count". Let's arrange to read them in:

In []:

def load_counts(filename, sep='\t'):
    """Return a Counter initialized from key-value pairs, 
    one on each line of filename."""
    C = Counter()
    for line in open(filename):
        key, count = line.split(sep)
        C[key] = int(count)
    return C

WORDS1 = load_counts('count_1w.txt')
WORDS2 = load_counts('count_2w.txt')

P1w = pdist(WORDS1)
P2w = pdist(WORDS2)

(6) Theory and Practice: Segmentation With Bigram Data

Recall that the less-wrong bigram model approximation to English is:

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \times \cdots \times P(w_n \mid w_{n-1})$

where the conditional probability of a word given the previous word is defined as:

$P(w_n \mid w_{n-1}) = P(w_{n-1}w_n) / P(w_{n-1}) $

In []:

def Pwords2(words, prev='<S>'):
    "The probability of a sequence of words, using bigram data, given prev word."
    return product(cPword(w, (prev if (i == 0) else words[i-1]) )
                   for (i, w) in enumerate(words))

# Change Pwords to use P1w instead of Pword
def Pwords(words):
    "Probability of words, assuming each word is independent of others."
    return product(P1w(w) for w in words)

def cPword(word, prev):
    "Conditional probability of word, given previous word."
    bigram = prev + ' ' + word
    if P2w(bigram) > 0 and P1w(prev) > 0:
        return P2w(bigram) / P1w(prev)
    else: # Average the back-off value and zero.
        return P1w(word) / 2

@memo 
def segment2(text, prev='<S>'): 
    "Return best segmentation of text; use bigram data." 
    if not text: 
        return []
    else:
        candidates = ([first] + segment2(rest, first) 
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=lambda words: Pwords2(words, prev))

(7) Theory: Evaluation

So far, we've got an intuitive feel for how this all works. But we don't have any solid metrics that quantify the results. Without metrics, we can't say if we are doing well, nor if a change is an improvement. In general, when developing a program that relies on data to help make predictions, it is good practice to divide your data into three sets:

  1. Training set: the data used to create our spelling model; this was the big.txt file.
  2. Development set: a set of input/output pairs that we can use to rank the performance of our program as we are developing it.
  3. Test set: another set of input/output pairs that we use to rank our program after we are done developing it. The development set can't be used for this purpose: once the programmer has looked at it, it is tainted, because the program might have been tweaked just to pass those particular cases. That's why we need a separate test set that is only looked at after development is done.

For this program, the training data is the word frequency counts, the development set is the examples like "choosespain" that we have been playing with, and now we need a test set.

In []:

def test_segmenter(segmenter, tests):
    "Try segmenter on tests; report failures; return (number correct, number of tests)."
    return (sum(test_one_segment(segmenter, test) 
                for test in tests), len(tests))

def test_one_segment(segmenter, test):
    words = tokens(test)
    result = segmenter(cat(words))
    correct = (result == words)
    if not correct:
        print 'expected', words
        print 'got     ', result
    return correct

proverbs = ("""A little knowledge is a dangerous thing
  A man who is his own lawyer has a fool for his client
  All work and no play makes Jack a dull boy
  Better to remain silent and be thought a fool than to speak and remove all doubt
  Do unto others as you would have them do to you
  Early to bed and early to rise, makes a man healthy, wealthy and wise
  Fools rush in where angels fear to tread
  Genius is one percent inspiration, ninety-nine percent perspiration
  If you lie down with dogs, you will get up with fleas
  Lightning never strikes twice in the same place
  Power corrupts; absolute power corrupts absolutely
  Here today, gone tomorrow
  See no evil, hear no evil, speak no evil
  Sticks and stones may break my bones, but words will never hurt me
  Take care of the pence and the pounds will take care of themselves
  Take care of the sense and the sounds will take care of themselves
  The bigger they are, the harder they fall
  The grass is always greener on the other side of the fence
  The more things change, the more they stay the same
  Those who do not learn from history are doomed to repeat it"""
  .splitlines())
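To produce the comparison described below, run both segmenters over this test set (a sketch; the exact counts depend on the corpora loaded above):

print test_segmenter(segment, proverbs)
print test_segmenter(segment2, proverbs)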

This confirms that both segmenters are very good, and that segment2 is slightly better. There is much more that can be done in terms of the variety of tests, and in measuring statistical significance.

(8) Smoothing

Let's go back to a test we did before, and add two more test cases:

tests = ['this is a test', 
         'this is a unusual test',
         'this is a nongovernmental test',
         'this is a neverbeforeseen test',
         'this is a zqbhjhsyefvvjqc test']

for test in tests:
    print Pwords(tokens(test)), test

The issue here is the finality of a probability of zero. Of the three 15-letter words, it turns out that "nongovernmental" is in the dictionary, but if it hadn't been, if somehow our corpus had missed it, then the probability of the whole phrase would have been zero. That seems too strict; there must be some "real" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of the likelihood of being a "real" word. "neverbeforeseen" does seem more English-like than "zqbhjhsyefvvjqc", and so perhaps should get a higher probability.

We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because it is more likely that a legitimate one will appear that has not been observed before.

We can think of our model as being overly spiky; it has a spike of probability mass wherever a word or phrase occurs in the corpus. What we would like to do is smooth over those spikes so that we get a model that does not depend on the details of our corpus. The process of "fixing" the model is called smoothing.

Pierre-Simon Laplace (1749-1827), for whom add-one (Laplace) smoothing is named:

In []:

def pdist_additive_smoothed(counter, c=1):
    """The probability of word, given evidence from the counter.
    Add c to the count for each item, plus the 'unknown' item."""
    N = sum(counter.values())
    Nnew = N + c * (len(counter) + 1) # add new fake observations
    return lambda word: (counter[word] + c) / Nnew 

P1w = pdist_additive_smoothed(WORDS1)
P1w('neverbeforeseen')

But now there's a problem: we have given previously-unseen words non-zero probabilities. And maybe $10^{-12}$ is about right for unknown words that actually occur in text: that is, if I'm reading a new text, the probability that the next word is one I have never seen before might be around $10^{-12}$. But if I'm manufacturing 20-letter sequences at random, the probability that one of them is a real word is much, much lower than $10^{-12}$.

Look what happens:

segment('thisisatestofsegmentationofalongsequenceofwords')

There are two problems:

First, we don't have a clear model of unknown words. We just say "unknown", but we don't distinguish likely unknowns from unlikely unknowns. For example, is an 8-character unknown word more likely than a 20-character one?

Second, we don't take into account evidence from parts of the unknown. For example, letter subsequences of words, or (n-1)-gram subsequences of n-grams.

For our next approach, Good-Turing smoothing re-estimates the probability of zero-count words based on the probability of one-count words (it can also re-estimate higher counts, but that is less interesting).

I. J. Good (1916-2009) and Alan Turing (1912-1954)

So, how many one-count words are there in WORDS? (There aren't any in WORDS1.) And what are the word lengths of them? Let's find out:

def pdist_good_turing_hack(counter, onecounter, base=1/26., prior=1e-9):
    """The probability of word, given evidence from the counter.
    For unknown words, look at the one-counts from onecounter, based on length.
    This gets ideas from Good-Turing, but doesn't implement all of it.
    prior is an additional factor to make unknowns less likely.
    base is how much we attenuate probability for each letter beyond longest."""
    N = sum(counter.values())
    N2 = sum(onecounter.values())
    lengths = map(len, [w for w in onecounter if onecounter[w] == 1])
    ones = Counter(lengths)
    longest = max(ones)
    return (lambda word: 
            counter[word] / N if (word in counter) 
            else prior * (ones[len(word)] / N2 or 
                          ones[longest] / N2 * base ** (len(word)-longest)))

# Redefine P1w
P1w = pdist_good_turing_hack(WORDS1, WORDS)
segment.cache.clear()
segment('thisisatestofsegmentationofalongsequenceofwords')

That was somewhat unsatisfactory. We really had to crank up the prior, specifically because the process of running segment generates so many non-word candidates (and also because there will be fewer unknowns with respect to the billion-word WORDS1 than with respect to the million-word WORDS). It would be better to separate out the prior from the word distribution, so that the same distribution could be used for multiple tasks, not just for this one.

Now let's think for a short while about smoothing bigram counts. Specifically, what if we haven't seen a bigram sequence, but we've seen both words individually? For example, to evaluate P("Greenland") in the phrase "turn left at Greenland", we might have three pieces of evidence:

P("Greenland")
P("Greenland" | "at")
P("Greenland" | "left", "at")

Presumably, the first would have a relatively large count, and thus large reliability, while the second and third would have decreasing counts and reliability. With interpolation smoothing we combine all three pieces of evidence, with a linear combination:

$P(w_3 \mid w_1w_2) = c_1 P(w_3) + c_2 P(w_3 \mid w_2) + c_3 P(w_3 \mid w_1w_2)$

How do we choose $c_1, c_2, c_3$? By experiment: train on training data, maximize $c$ values on development data, then evaluate on test data.
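A minimal sketch of the idea, using only the unigram and bigram estimates already defined (P_interpolated and its weights are illustrative placeholders, not tuned values):

def P_interpolated(word, prev, c1=0.2, c2=0.8):
    """Linear interpolation of the unigram and bigram estimates of P(word | prev).
    With trigram counts there would be a third term, as in the formula above;
    the weights would be tuned on development data, not picked by hand."""
    return c1 * P1w(word) + c2 * cPword(word, prev)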

However, when we do this, we are saying, with probability $c_1$, that a word can appear anywhere, regardless of previous words. But some words are more free to do that than other words. Consider two words with similar probability:

print P1w('francisco')
print P1w('individuals')

They have similar unigram probabilities, but they differ in their freedom to be the second word of a bigram: "francisco" almost always appears immediately after "san", while "individuals" follows many different words.
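One way to see this from the bigram counts loaded above (distinct_predecessors is a helper introduced here just for illustration):

def distinct_predecessors(word):
    "Number of distinct words that precede `word` in the bigram counts WORDS2."
    return sum(1 for bigram in WORDS2 if bigram.endswith(' ' + word))

print 'francisco:  ', distinct_predecessors('francisco')
print 'individuals:', distinct_predecessors('individuals')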

Intuitively, a word that has already appeared after many different preceding words is more likely to show up in a new, previously unseen bigram. Kneser-Ney smoothing (Reinhard Kneser, Hermann Ney) builds this in: the lower-order estimate used for backing off is based on how many distinct contexts a word has appeared in, rather than on its raw count. But I won't implement that here, because The Count never covered it.

(9) One More Task: Secret Codes

Let's tackle one more task: decoding secret codes. We'll start with the simplest of codes, a rotation cipher, sometimes called a shift cipher or a Caesar cipher (because this was state-of-the-art cryptography in 100 BC). First, a method to encode:

In []:

import string

def rot(msg, n=13): 
    "Encode a message with a rotation (Caesar) cipher." 
    return encode(msg, alphabet[n:]+alphabet[:n])

def encode(msg, key): 
    "Encode a message with a substitution cipher." 
    table = string.maketrans(upperlower(alphabet), upperlower(key))
    return msg.translate(table) 

def upperlower(text): return text.upper() + text.lower()

def decode_rot(secret):
    "Decode a secret message that has been encoded with a rotation cipher."
    candidates = [rot(secret, i) for i in range(1, len(alphabet)+1)]
    return max(candidates, key=lambda msg: Pwords(tokens(msg)))

decode_rot(rot('This is a secret message.'))

Let's make it a tiny bit harder. When the secret message contains separate words, it is too easy to decode by guessing that the one-letter words are most likely "I" or "a". So what if the encode routine mushed all the letters together:

In []:

def encode(msg, key): 
    "Encode a message with a substitution cipher; remove non-letters." 
    msg = cat(c for c in msg.lower() if c in alphabet)  ## Change here
    table = string.maketrans(upperlower(alphabet), upperlower(key))
    return msg.translate(table) 
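For example (a quick check of the redefined encode; rot still calls it, so the ciphertext is now one unbroken string):

rot('This is a secret message.')
# => 'guvfvfnfrpergzrffntr' (spaces and punctuation are gone, so the
#    one-letter-word giveaway no longer applies)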
Original article: http://www.wumii.com/item/YDkD2Qyf
