Sparse matrix
In the mathematical subfield of numerical analysis, a sparse matrix is a matrix populated primarily with zeros. A sparse graph is a graph with a sparse adjacency matrix.
Sparsity, the property of having a low density of significant data or connections, is a useful concept in combinatorics and in application areas such as network theory. It lends itself to quantitative reasoning and is also noticeable in everyday life.
Huge sparse matrices often appear in science and engineering when solving problems based on linear models.
When storing and manipulating sparse matrices on a computer, it is often necessary to modify the standard algorithms to take advantage of the sparse structure of the matrix. Sparse data is by its nature easily compressed, which can yield enormous savings in memory usage. More importantly, manipulating a huge sparse matrix with the standard dense algorithms may be impossible because of its sheer size. What counts as huge depends on the hardware and the computer programs available to manipulate the matrix.
Definitions
Given a sparse N×M matrix A, the row bandwidth for the n-th row is defined as
- <math>b_n(\mathbf{A}) := \min \{\, m \mid 1 \le m \le M,\ a_{n,m} \neq 0 \,\}</math>
The bandwidth for the matrix is defined as
- <math>B(\mathbf{A}) := \max_{1 \le n \le N} b_n(\mathbf{A})</math>
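As an illustration, the following short Python sketch computes the row bandwidths and the matrix bandwidth exactly as defined above for a matrix stored as a list of rows; the function names and the sample matrix are only illustrative.
    def row_bandwidth(A, n):
        """b_n(A): the smallest column index m (1-based) with a non-zero entry in row n."""
        nonzero_columns = [m for m, value in enumerate(A[n], start=1) if value != 0]
        return min(nonzero_columns) if nonzero_columns else None

    def matrix_bandwidth(A):
        """B(A): the maximum of b_n(A) over all rows n."""
        bandwidths = (row_bandwidth(A, n) for n in range(len(A)))
        return max(b for b in bandwidths if b is not None)

    A = [[0, 0, 3],
         [0, 5, 0],
         [7, 0, 0]]
    print(matrix_bandwidth(A))  # prints 3: the first row has its first non-zero in column 3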
Example
A bitmap image with only two colors, one of them dominant (say, a file that stores a handwritten signature), can be encoded as a sparse matrix that contains only the row and column numbers of the pixels with the non-dominant color.
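A minimal Python sketch of this idea, assuming pixel value 0 is the dominant background color and 1 is the non-dominant "ink" color (the function names are illustrative):
    def encode_bitmap(bitmap):
        # keep only the (row, column) coordinates of non-dominant pixels
        return [(r, c) for r, row in enumerate(bitmap)
                       for c, pixel in enumerate(row) if pixel != 0]

    def decode_bitmap(pairs, height, width):
        # rebuild the full bitmap from the stored coordinates
        bitmap = [[0] * width for _ in range(height)]
        for r, c in pairs:
            bitmap[r][c] = 1
        return bitmap

    signature = [[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0]]
    coords = encode_bitmap(signature)            # [(1, 1), (1, 2), (2, 2)]
    assert decode_bitmap(coords, 3, 4) == signature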
Storing a sparse matrix
The naive data structure for a matrix is a two-dimensional array. Each entry in the array represents an element a_{i,j} of the matrix and can be accessed by the two indices i and j. For an n×m matrix, at least (n·m)/8 bytes are needed to represent the matrix, even when assuming only 1 bit per entry.
A sparse matrix contains many zero entries. The basic idea when storing sparse matrices is to store only the non-zero entries, as opposed to storing all entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory compared to the naive approach.
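One simple scheme of this kind, sketched below in Python, keeps the non-zero entries in a dictionary keyed by (row, column); absent keys are treated as zeros. The class and its methods are illustrative, not taken from any particular library.
    class SparseDictMatrix:
        def __init__(self, n_rows, n_cols):
            self.shape = (n_rows, n_cols)
            self.entries = {}                  # maps (i, j) -> non-zero value

        def __setitem__(self, index, value):
            if value != 0:
                self.entries[index] = value
            else:
                self.entries.pop(index, None)  # storing a zero removes the entry

        def __getitem__(self, index):
            return self.entries.get(index, 0)  # missing entries are implicitly zero

    A = SparseDictMatrix(1000, 1000)
    A[3, 7] = 4.5
    print(A[3, 7], A[0, 0])                    # 4.5 0
Only the stored entries consume memory, so the cost grows with the number of non-zeros rather than with the full n×m size.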
Diagonal matrix
A very efficient structure for a diagonal matrix is to store just the entries on the main diagonal as a one-dimensional array. For an n×n matrix, only n/8 bytes are needed when assuming 1 bit per entry.
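For example, a Python sketch of this representation (the names are illustrative) stores only the diagonal and answers all other entries with zero; a matrix-vector product then costs only n multiplications:
    class DiagonalMatrix:
        def __init__(self, diagonal):
            self.diagonal = list(diagonal)       # one stored value per row/column

        def __getitem__(self, index):
            i, j = index
            return self.diagonal[i] if i == j else 0

        def multiply_vector(self, x):
            # (D x)_i = d_i * x_i, so only n multiplications are needed
            return [d * xi for d, xi in zip(self.diagonal, x)]

    D = DiagonalMatrix([1, 2, 3])
    print(D[1, 1], D[0, 2])                      # 2 0
    print(D.multiply_vector([4, 5, 6]))          # [4, 10, 18]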
Reducing the bandwidth
The Cuthill-McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix.
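A rough Python sketch of the Cuthill-McKee ordering is given below; it works on the adjacency structure of the matrix (vertex i is adjacent to vertex j when a_{i,j} ≠ 0), starts each component at a vertex of minimum degree, and visits neighbours in order of increasing degree. It is meant only to illustrate the idea, not as a tuned implementation.
    from collections import deque

    def cuthill_mckee(adjacency):
        """adjacency: dict mapping each vertex 0..n-1 to a list of its neighbours."""
        n = len(adjacency)
        degree = {v: len(adjacency[v]) for v in adjacency}
        visited = [False] * n
        order = []
        # handle disconnected graphs: start each component at a vertex of minimum degree
        for start in sorted(adjacency, key=lambda v: degree[v]):
            if visited[start]:
                continue
            visited[start] = True
            queue = deque([start])
            while queue:
                v = queue.popleft()
                order.append(v)
                # enqueue unvisited neighbours by increasing degree
                for w in sorted(adjacency[v], key=lambda u: degree[u]):
                    if not visited[w]:
                        visited[w] = True
                        queue.append(w)
        return order        # reversing it gives the reverse Cuthill-McKee ordering

    adjacency = {0: [3], 1: [2, 4], 2: [1], 3: [0], 4: [1]}
    print(cuthill_mckee(adjacency))              # [0, 3, 2, 1, 4]
Renumbering the rows and columns of the matrix according to this order tends to move the non-zero entries closer to the diagonal.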
Reducing the fill-in
The fill-in of a matrix consists of those entries which change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before performing the actual Cholesky decomposition.
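The following naive Python sketch simulates the elimination on the non-zero pattern alone and reports which initially zero entries would fill in; it illustrates the idea behind a symbolic factorization but is far less efficient than real symbolic Cholesky codes.
    def symbolic_fill_in(pattern, n):
        """pattern: set of (i, j) pairs where the symmetric n x n matrix is non-zero."""
        initial = set(pattern) | {(j, i) for i, j in pattern} | {(i, i) for i in range(n)}
        work = set(initial)
        for k in range(n):
            below = [i for i in range(k + 1, n) if (i, k) in work]
            # eliminating column k couples every pair of rows that both touch it
            for i in below:
                for j in below:
                    work.add((i, j))
        return work - initial                    # the entries that fill in

    # "arrow" pattern: full first row and column plus the diagonal
    n = 4
    pattern = {(0, j) for j in range(n)} | {(i, 0) for i in range(n)}
    print(sorted(symbolic_fill_in(pattern, n)))
    # with this ordering every off-diagonal entry fills in; numbering the
    # arrow the other way round (full last row and column) gives no fill-in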