Heaps' law
In linguistics, Heaps' law is an empirical law describing the portion of a vocabulary that is represented by an instance document (or set of instance documents) consisting of words drawn from that vocabulary. It can be formulated as
- <math>V_R(n) = Kn^{\beta}</math>
where V_R(n) is the number of distinct words in the subset of the vocabulary V represented by an instance text of size n (measured in words), and K and β are free parameters determined empirically.
With English text corpora, typically K is between 10 and 100, and β is between 0.4 and 0.6.
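In practice, K and β can be estimated by counting the number of distinct words at increasing prefixes of a corpus and fitting a straight line to the resulting points on log-log axes. The Python sketch below illustrates one such procedure; the tokenization rule, the sampling interval, and the file name corpus.txt are simplifying assumptions for illustration, not part of the law itself.

```python
import math
import re

def heaps_curve(text, sample_every=1000):
    """Return (n, distinct-word count) pairs sampled every `sample_every` tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenizer, for illustration
    seen = set()
    points = []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if i % sample_every == 0:
            points.append((i, len(seen)))
    return points

def fit_heaps(points):
    """Fit V_R(n) = K * n**beta by least squares on log V_R versus log n."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(v) for _, v in points]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
    K = math.exp(mean_y - beta * mean_x)
    return K, beta

# Usage (assumes a plain-text file named corpus.txt; the name is illustrative):
# points = heaps_curve(open("corpus.txt", encoding="utf-8").read())
# K, beta = fit_heaps(points)
# print(f"K = {K:.1f}, beta = {beta:.2f}")
```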
[Image: Heaps_law_plot.png] A typical Heaps'-law plot. The x-axis represents the text size, and the y-axis represents the number of distinct vocabulary elements present in the text; note the difference in scale between the two axes.
Heaps' law means that as more instance text is gathered, there are diminishing returns in discovering the full vocabulary from which the distinct terms are drawn.
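As a rough numerical illustration, take K = 44 and β = 0.49, values chosen here only because they lie within the typical English-text ranges quoted above. Each tenfold increase in text size then multiplies the predicted vocabulary by only a factor of 10^β, roughly 3.1:

```python
# Illustrative only: K and beta are assumed values inside the typical ranges
# quoted above for English text, not measurements from any particular corpus.
K, beta = 44.0, 0.49

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    predicted = K * n ** beta
    print(f"n = {n:>10,}  predicted distinct words ~ {predicted:,.0f}")

# Each tenfold increase in n multiplies V_R(n) by only 10**beta, about 3.1.
```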
Heaps' law also applies in the more general case where the "vocabulary" is simply some set of distinct types that are attributes of some collection of objects. For example, the objects could be people, and the types could be their countries of origin. If people are selected at random (that is, not selected on the basis of country of origin), then Heaps' law says the sample will quickly include representatives of most countries (in proportion to their populations), but covering the entire set of countries by continuing this method of sampling becomes increasingly difficult.
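The following small simulation sketches this scenario. The country labels and population weights are invented purely for illustration; the qualitative behaviour, with common countries appearing almost immediately while the last few rare ones take much longer to show up, is the point of interest.

```python
import random

# Hypothetical countries with invented population weights (illustrative only).
populations = {"A": 500, "B": 300, "C": 120, "D": 50, "E": 20, "F": 8, "G": 2}
countries = list(populations)
weights = list(populations.values())

random.seed(0)
seen = set()
for i in range(1, 5001):
    # Each draw selects one person uniformly at random, so each country
    # appears with probability proportional to its population.
    seen.add(random.choices(countries, weights=weights)[0])
    if i in (10, 100, 1000):
        print(f"after {i:>4} draws: {len(seen)} distinct countries seen")
    if len(seen) == len(populations):
        print(f"all {len(populations)} countries seen after {i} draws")
        break
```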
References
- H. S. Heaps. Information Retrieval: Computational and Theoretical Aspects. Academic Press, 1978.
- R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999.