Bayesian filtering
Bayesian filtering is the process of using Bayesian statistical methods to classify documents into categories.
Bayesian filtering gained attention when it was described in the paper A Plan for Spam (http://www.paulgraham.com/spam.html) by Paul Graham, and has become a popular mechanism to distinguish illegitimate spam email from legitimate "ham" email. Many modern mail programs such as Mozilla Thunderbird implement Bayesian spam filtering. Server-side email filters, such as SpamAssassin and ASSP, make use of Bayesian spam filtering techniques.
Bayesian email filters take advantage of Bayes' theorem. Bayes' theorem, in the context of spam, says that the probability that an email is spam, given that it has certain words in it, is equal to the probability of finding those certain words in spam email, times the probability that any email is spam, divided by the probability of finding those words in any email:
- <math>P(spam|words) = \frac{P(words|spam)P(spam)}{P(words)}</math>
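As a hypothetical illustration (the figures below are invented for the example, not measured), suppose that half of all incoming email is spam, that the words in question appear in 80% of spam messages, and that they appear in only 2% of ham messages. The denominator can be expanded with the law of total probability, and the posterior follows:
- <math>P(words) = P(words|spam)P(spam) + P(words|ham)P(ham) = 0.8 \times 0.5 + 0.02 \times 0.5 = 0.41</math>
- <math>P(spam|words) = \frac{0.8 \times 0.5}{0.41} \approx 0.976</math>
In this example the filter would judge the email to be spam with roughly 97.6% probability.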
Particular words have particular probabilities of occurring in spam email and in ham email. For instance, most email users will frequently encounter the word Viagra in spam email, but will seldom see it in ham email. The filter doesn't know these probabilities in advance, and must first be trained so it can build them up. To train the filter, the user must manually indicate whether a new email is spam or ham. For all words in each training email, the filter will accordingly adjust the words' spam and ham probabilities in its database. For instance, Bayesian spam filters will typically have learned a very high spam probability for the words "Viagra" and "refinance", but a very low spam probability (and a very high ham probability) for words seen only in ham email, such as the names of friends and family members.
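The sketch below illustrates this training step in Python. It is not taken from any particular filter: the class name BayesianFilter, the per-email word counting, and the equal-prior assumption P(spam) = P(ham) are simplifications made purely for the example.
<pre>
from collections import Counter

class BayesianFilter:
    """Illustrative sketch of the training step, not a production filter."""

    def __init__(self):
        self.spam_counts = Counter()  # emails containing each word, among spam
        self.ham_counts = Counter()   # emails containing each word, among ham
        self.spam_emails = 0          # number of spam training emails seen
        self.ham_emails = 0           # number of ham training emails seen

    def train(self, words, is_spam):
        """Record one manually labelled email (given as a sequence of words)."""
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in set(words):       # count each word at most once per email
            counts[word] += 1
        if is_spam:
            self.spam_emails += 1
        else:
            self.ham_emails += 1

    def word_spam_probability(self, word):
        """Estimate P(spam | word) from the counts, assuming equal priors
        P(spam) = P(ham); words never seen in training get a neutral 0.5."""
        p_word_given_spam = self.spam_counts[word] / max(self.spam_emails, 1)
        p_word_given_ham = self.ham_counts[word] / max(self.ham_emails, 1)
        if p_word_given_spam + p_word_given_ham == 0:
            return 0.5
        return p_word_given_spam / (p_word_given_spam + p_word_given_ham)
</pre>
Counting each word at most once per email and treating unseen words as neutral are simplifications; real filters such as SpamAssassin apply more careful smoothing and prior assumptions.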
After training, the spam and ham word probabilities (also known as likelihood functions) are used to compute the probability that an email with a particular set of words in it belongs to either the spam or ham category. Each word in the email contributes to the email's spam probability; this per-word contribution is the posterior probability computed using Bayes' theorem. The per-word contributions are then combined over all words in the email, and if the resulting spam probability exceeds a certain threshold (say 95%), the filter will mark the email as spam. Email marked as spam can then be automatically moved to a "Junk" email folder, or even deleted outright.
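A correspondingly minimal sketch of the classification step is shown below. The combination rule prod(p_i) / (prod(p_i) + prod(1 - p_i)) is the naive-Bayes-style formula popularised by Graham's essay; the word_probs dictionary, the 0.95 threshold, and the clamping constants are illustrative assumptions rather than a description of any specific filter.
<pre>
import math

def spam_score(word_probs, words, threshold=0.95):
    """Combine per-word spam probabilities into one score for the email.

    word_probs maps a word to its learned P(spam | word); unknown words
    count as a neutral 0.5. Summing logarithms instead of multiplying
    probabilities avoids numerical underflow on long emails.
    """
    log_spam = 0.0   # log of prod(p_i)
    log_ham = 0.0    # log of prod(1 - p_i)
    for word in set(words):
        p = word_probs.get(word, 0.5)
        p = min(max(p, 0.01), 0.99)   # clamp so log() stays finite
        log_spam += math.log(p)
        log_ham += math.log(1.0 - p)
    probability = 1.0 / (1.0 + math.exp(log_ham - log_spam))
    return probability, probability > threshold

# Hypothetical learned probabilities, purely for illustration.
learned = {"viagra": 0.99, "refinance": 0.95, "lunch": 0.05, "tomorrow": 0.10}
print(spam_score(learned, ["viagra", "refinance", "tomorrow"]))
# -> roughly (0.995, True): the email would be filed as spam
</pre>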
The advantage of Bayesian spam filtering is that it can be trained on a per-user basis. The spam received by a user often has some relevance to her, and defines the characteristic spam likelihood function for her filter. For example, placing a personal ad may increase the amount of personal-ad-related spam that she receives. As a result, her Bayesian spam filter would learn a higher spam probability for words common in personal-ad spam than it would if it were trained on some other user's email. The ham that she receives will also tend to be relevant to her: many of her coworkers, friends, and family members will discuss related subjects, and therefore use similar words, generating a characteristic ham likelihood function. These two likelihood functions are unique to each user and can evolve over time with corrective training whenever the filter incorrectly classifies an email. Consequently, Bayesian spam filtering accuracy can be excellent, often superior to pre-defined rules; SpamAssassin can combine the results of Bayesian spam filtering and pre-defined rules, resulting in even higher filtering accuracy. A recent spammer tactic is to insert random words that are not normally associated with spam, thereby decreasing the email's spam score and increasing its ham score, making it more likely to slip past a Bayesian spam filter.
While Bayesian filtering is used widely to identify spam email, the technique can classify (or "cluster") almost any sort of data. It has uses in science, medicine, and engineering. One example is a general-purpose classification program called AutoClass (http://ic.arc.nasa.gov/ic/projects/bayes-group/autoclass/), which was originally used to classify stars according to spectral characteristics that were otherwise too subtle to notice. There is also recent speculation that the brain itself uses Bayesian methods to classify sensory stimuli and decide on behavioural responses (Trends in Neurosciences, 27(12):712-9, 2004) (http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf).