# Low-discrepancy sequence

In mathematics, a low-discrepancy sequence is a sequence with the property that for all N, the subsequence x1, ..., xN is almost uniformly distributed (in a sense to be made precise), and x1, ..., xN+1 is almost uniformly distributed as well.

Low-discrepancy sequences are also called quasi-random or sub-random sequences, because they are used in situations where pseudorandom or random numbers would otherwise be used. The "quasi" modifier signals that the numbers are not random, but have useful properties resembling randomness in certain applications, such as the quasi-Monte Carlo method.

The notion of uniformity is made precise as the discrepancy defined below. Roughly speaking, the discrepancy of a sequence is low if the number of points falling into a set B is close to the number one would expect from the measure of B.

At least three methods of numerical integration can be phrased as follows. Given a set x1, ..., xN in the interval [0,1], approximate the integral of a function f as the average of the function evaluated at those points:

[itex] \int_0^1 f(u)\,du \approx \frac{1}{N}\,\sum_{i=1}^N f(x_i). [itex]

If the points are chosen as xi = i/N, this is the rectangle rule. If the points are chosen to be randomly (or pseudorandomly) distributed, this is the Monte Carlo method. If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method. A remarkable result, the Koksma-Hlawka inequality, shows that the error of such a method can be bounded by the product of two terms, one of which depends only on f, and the other of which is the discrepancy of the set x1, ..., xN. The Koksma-Hlawka inequality is stated below.

It is convenient to construct the set x1, ..., xN in such a way that if a set with N+1 elements is constructed, the previous N elements need not be recomputed. The rectangle rule uses point sets which have low discrepancy, but in general the elements must be recomputed if N is increased. Elements need not be recomputed in the Monte Carlo method if N is increased, but the point sets do not have minimal discrepancy. By using low-discrepancy sequences, the quasi-Monte Carlo method combines the desirable features of the other two methods.
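A minimal Python sketch of the three rules applied to the test integrand f(u) = u², whose exact integral over [0, 1] is 1/3. The function names, and the choice of the base-2 van der Corput sequence as the low-discrepancy sequence, are illustrative rather than prescribed by the text:

```python
import random

def f(u):
    return u * u  # test integrand; exact integral over [0, 1] is 1/3

def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence: reverse the base-b digits of n."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

N = 1024
rect = sum(f(i / N) for i in range(1, N + 1)) / N             # rectangle rule
random.seed(0)
mc = sum(f(random.random()) for _ in range(N)) / N            # Monte Carlo
qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N  # quasi-Monte Carlo

for name, est in [("rectangle", rect), ("Monte Carlo", mc), ("quasi-MC", qmc)]:
    print(f"{name:12s} estimate {est:.6f}  error {abs(est - 1/3):.2e}")
```

For this smooth one-dimensional integrand the rectangle rule and the quasi-Monte Carlo estimate both have error of order 1/N, while the Monte Carlo error is of order 1/√N; only the two randomized-style methods extend without recomputation when N grows.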


## Definition of discrepancy

The star-discrepancy is defined as follows, using Niederreiter's notation:

[itex] D^*_N(P) = \sup_{B\in J^*} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right| [itex]

where P is the set x1, ..., xN, λs is the s-dimensional Lebesgue measure, A(B;P) is the number of points in P that fall into B, and J* is the set of intervals of the form

[itex] \prod_{i=1}^s [0, u_i) [itex]

where ui is in the half-open interval [0, 1). Therefore

[itex] D^*_N(P) = \|{\rm disc}\|_\infty [itex]

where the discrepancy function is defined by

[itex] {\rm disc}(y)=\frac{A([0,y);P)}{N}-\lambda_s([0,y)). [itex]
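In one dimension the supremum in this definition can be evaluated exactly: for the sorted points x_(1) ≤ ... ≤ x_(N), the star discrepancy has the closed form D*_N = 1/(2N) + max_i |x_(i) - (2i-1)/(2N)| (Niederreiter). A minimal Python sketch, with illustrative function and variable names:

```python
def star_discrepancy_1d(points):
    """Exact one-dimensional star discrepancy via Niederreiter's closed form:
    D*_N = 1/(2N) + max_i |x_(i) - (2i-1)/(2N)| for the sorted points x_(i)."""
    xs = sorted(points)
    n = len(xs)
    return 1 / (2 * n) + max(abs(x - (2 * i - 1) / (2 * n))
                             for i, x in enumerate(xs, 1))

N = 8
# the centred grid (2i-1)/(2N) attains the minimal possible value 1/(2N)
centred = [(2 * i - 1) / (2 * N) for i in range(1, N + 1)]
print(star_discrepancy_1d(centred))        # → 0.0625, i.e. 1/(2N)
# clustering all points near 1 leaves [0, 0.9) empty, so the discrepancy is about 0.9
clustered = [0.9] * N
print(star_discrepancy_1d(clustered))
```

The two extremes illustrate the definition: the centred grid matches every interval's measure as closely as any N points can, while the clustered set leaves the interval [0, 0.9) with no points at all.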

## The Koksma-Hlawka inequality

Let Īs be the s-dimensional unit cube, Īs = [0, 1] × ... × [0, 1]. Let f have bounded variation V(f) on Īs in the sense of Hardy and Krause. Then for any x1, ..., xN in Is = [0, 1) × ... × [0, 1),

[itex] \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| \le V(f)\, D_N^* (x_1,\ldots,x_N). [itex]

The Koksma-Hlawka inequality is sharp in the following sense:

For any point set x1, ..., xN in Is and any [itex]\epsilon>0[itex], there is a function f with bounded variation and V(f) = 1 such that

[itex] \left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| > D_{N}^{*}(x_1,\ldots,x_N)-\epsilon. [itex]

Therefore, the quality of a numerical integration rule depends only on the discrepancy D*N(x1,...,xN).
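As a sanity check on the inequality, the following Python sketch (names illustrative) integrates f(u) = u² over [0, 1] with base-2 van der Corput points. Since f is monotone with f(1) - f(0) = 1, its variation is V(f) = 1, and the quadrature error is verified against V(f)·D*_N, here computed from the equivalent exact one-dimensional formula D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N):

```python
def star_disc(points):
    # exact 1-D star discrepancy: the supremum is attained at the sample points,
    # giving D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N) for sorted x_(i)
    xs, n = sorted(points), len(points)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def van_der_corput(n, base=2):
    # radical inverse of n: reverse the base-b digits across the radix point
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

f = lambda u: u * u              # V(f) = 1 on [0, 1]: monotone, f(1) - f(0) = 1
N = 256
pts = [van_der_corput(i) for i in range(1, N + 1)]
error = abs(sum(map(f, pts)) / N - 1 / 3)
bound = 1.0 * star_disc(pts)     # V(f) * D*_N, the Koksma-Hlawka bound
assert error <= bound
print(error, "<=", bound)
```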

## The formula of Hlawka-Zaremba

Let [itex]D=\{1,2,\ldots,d\}[itex] be the set of coordinate indices, with d = s. For [itex]\emptyset\neq u\subseteq D[itex] we write

[itex] dx_u:=\prod_{j\in u} dx_j [itex]

and denote by [itex](x_u,1)[itex] the point obtained from [itex]x[itex] by replacing the coordinates not in [itex]u[itex] by [itex]1[itex]. Then

[itex] \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du = \sum_{\emptyset\neq u\subseteq D}(-1)^{|u|} \int_{[0,1]^{|u|}}{\rm disc}(x_u,1)\frac{\partial^{|u|}}{\partial x_u}f(x_u,1)\, dx_u. [itex]

## The [itex]L^2[itex] version of the Koksma-Hlawka inequality

Applying the Cauchy-Schwarz inequality for integrals and sums to the Hlawka-Zaremba identity, we obtain an [itex]L^2[itex] version of the Koksma-Hlawka inequality:

[itex] \left|\frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du\right| \le \|f\|_{d}\,{\rm disc}_{d}(\{t_i\}), [itex]

where

[itex] {\rm disc}_{d}(\{t_i\})=\left(\sum_{\emptyset\neq u\subseteq D} \int_{[0,1]^{|u|}}{\rm disc}(x_u,1)^2\, dx_u\right)^{1/2} [itex]

and

[itex] \|f\|_{d}=\left(\sum_{u\subseteq D} \int_{[0,1]^{|u|}} \left|\frac{\partial^{|u|}}{\partial x_u}f(x_u,1)\right|^2 dx_u\right)^{1/2}. [itex]

## The Erdős-Turán-Koksma inequality

It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős-Turán-Koksma inequality provides a computable upper bound.

Let x1,...,xN be points in Is and H be an arbitrary positive integer. Then

[itex] D_{N}^{*}(x_1,\ldots,x_N)\leq \left(\frac{3}{2}\right)^s \left( \frac{2}{H+1}+ \sum_{0<\|h\|_{\infty}\leq H}\frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i\langle h,x_n\rangle} \right| \right) [itex]

where

[itex] r(h)=\prod_{i=1}^{s}\max\{1,|h_i|\}. [itex]
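A hedged one-dimensional illustration in Python (the function name is mine): in dimension s = 1 the weight reduces to r(h) = max(1, |h|), and the ±h terms of the frequency sum have equal modulus, so the sum over 0 < |h| ≤ H can be folded to h = 1, ..., H. The bound is evaluated for the centred grid, whose exact star discrepancy is 1/(2N), showing that the bound is easy to compute even when looser than the true value:

```python
import cmath

def etk_bound(points, H):
    """Erdős-Turán-Koksma upper bound on D*_N in dimension s = 1.
    Here r(h) = max(1, |h|) = |h| for the nonzero frequencies, and the
    sum over 0 < |h| <= H is folded to h = 1..H (hence the factor 2)."""
    n = len(points)
    total = 0.0
    for h in range(1, H + 1):
        exp_sum = sum(cmath.exp(2j * cmath.pi * h * x) for x in points)
        total += abs(exp_sum / n) / h
    return (3 / 2) * (2 / (H + 1) + 2 * total)

# the centred grid has exact star discrepancy 1/(2N) = 0.0625;
# its exponential sums vanish for h = 1..N-1, leaving only the 2/(H+1) term
N = 8
centred = [(2 * i - 1) / (2 * N) for i in range(1, N + 1)]
print(etk_bound(centred, 7))   # ≈ 0.375, an upper bound on the true 0.0625
```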

## The main conjectures

Conjecture 1. There is a constant cs depending only on s, such that

[itex]D_{N}^{*}(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{s-1}}{N}[itex]

for any finite point set x1,...,xN.

Conjecture 2. There is a constant c's depending only on s, such that

[itex]D_{N}^{*}(x_1,\ldots,x_N)\geq c'_s\frac{(\ln N)^{s}}{N}[itex]

for any infinite sequence x1,x2,x3,....

These conjectures are equivalent. They have been proved for s ≤ 2 by W. M. Schmidt. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due to K. F. Roth.

## The best-known sequences

Constructions of sequences are known (due to Faure, Halton, Hammersley, Sobol, Niederreiter and Van der Corput) such that

[itex] D_{N}^{*}(x_1,\ldots,x_N)\leq C\frac{(\ln N)^{s}}{N} [itex]

where C is a certain constant depending on the sequence. By Conjecture 2, these sequences are believed to have the best possible order of convergence. See also: Halton sequences.
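As one concrete construction, the Halton sequence runs a van der Corput sequence in each coordinate, one pairwise-coprime base per dimension (usually the first s primes). A minimal Python sketch, with illustrative function names:

```python
def van_der_corput(n, base):
    """Radical inverse of n in the given base (the van der Corput sequence)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n, bases=(2, 3)):
    """n-th point of the Halton sequence: one van der Corput sequence per
    coordinate, in pairwise coprime bases (here the first primes)."""
    return tuple(van_der_corput(n, b) for b in bases)

for i in range(1, 6):
    print(halton(i))   # first points of the (2, 3)-Halton sequence in [0,1)^2
```

Because the bases are coprime, the coordinates fill the unit square without the correlated stripes that a common base would produce; extending from N to N + 1 points requires computing only the new point.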

## Lower bounds

Let s = 1. Then

[itex] D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2N} [itex]

for any finite point set x1, ..., xN.

Let s = 2. W. M. Schmidt proved that for any finite point set x1, ..., xN,

[itex] D_N^*(x_1,\ldots,x_N)\geq C\frac{\log N}{N} [itex]

where

[itex] C=\max_{a\geq3}\frac{1}{16}\frac{a-2}{a\log a}=0.02333... [itex]

For arbitrary dimensions s > 1, K.F. Roth proved that

[itex] D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2^{4s}}\frac{1}{((s-1)\log2)^\frac{s-1}{2}}\frac{\log^{\frac{s-1}{2}}N}{N} [itex]

for any finite point set x1, ..., xN. This bound is the best known for s > 3.

## References

• Harald Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-295-5
• Michael Drmota and Robert F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Math., 1651, Springer, Berlin, 1997, ISBN 3-540-62606-9
• William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling. Numerical Recipes in C. Cambridge, UK: Cambridge University Press, second edition 1992. ISBN 0-521-43108-5 (see Section 7.7 for a less technical discussion of low-discrepancy sequences)
