In mathematics, more specifically in linear algebra, the spark of a matrix A is the smallest integer k such that there exists a set of k columns in A which are linearly dependent. If all the columns are linearly independent, spark(A) is usually defined to be 1 more than the number of rows. The concept of matrix spark finds applications in error-correction codes, compressive sensing, and matroid theory, and provides a simple criterion for maximal sparsity of solutions to a system of linear equations.

The spark of a matrix is NP-hard to compute.

Definition

Formally, the spark of a matrix A is defined as follows:

    spark(A) = min { ‖d‖₀ : Ad = 0, d ≠ 0 }        (Eq. 1)

where d is a nonzero vector and ‖d‖₀ denotes its number of nonzero coefficients[1] (‖d‖₀ is also referred to as the size of the support of a vector). Equivalently, the spark of a matrix A is the size of its smallest circuit C (a subset C of column indices such that A_C x = 0 has a nonzero solution x, but every proper subset of C does not[1]).

If all the columns are linearly independent, spark(A) is usually defined to be m + 1 (if A has m rows).[2][3]

By contrast, the rank of a matrix A is the largest number k such that some set of k columns of A is linearly independent.[4]
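The definition above translates directly into a brute-force procedure: check column subsets of increasing size until a linearly dependent one is found. The following is a minimal NumPy sketch, feasible only for small matrices since the search is exponential in the number of columns.

```python
from itertools import combinations

import numpy as np

def spark(A: np.ndarray) -> int:
    """Return the spark of A, or m + 1 if all columns are independent."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # A set of k columns is linearly dependent iff its rank is below k.
            if np.linalg.matrix_rank(A[:, cols]) < k:
                return k
    return m + 1

print(spark(np.eye(3)))  # all columns of the identity are independent, so m + 1 = 4
```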

Example

Consider the following matrix A:

    A = [ 1   2   0   1 ]
        [ 1   2   0   2 ]
        [ 1   2   0   3 ]
        [ 1   0  -3   4 ]

The spark of this matrix equals 3 because:

  • There is no set of 1 column of A which is linearly dependent (no column is the zero vector).
  • There is no set of 2 columns of A which are linearly dependent.
  • But there is a set of 3 columns of A which are linearly dependent: the first three columns satisfy 2a₁ − a₂ + (2/3)a₃ = 0.
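The dependency relation among the first three columns can be checked numerically. A small NumPy sketch (the example matrix is restated so the snippet is self-contained):

```python
import numpy as np

# The example matrix, restated for a self-contained check.
A = np.array([[1., 2., 0., 1.],
              [1., 2., 0., 2.],
              [1., 2., 0., 3.],
              [1., 0., -3., 4.]])

# 2*a1 - a2 + (2/3)*a3 should be the zero vector.
combo = 2 * A[:, 0] - A[:, 1] + (2 / 3) * A[:, 2]
assert np.allclose(combo, 0)

# The first three columns have rank 2 < 3, i.e. they are dependent.
assert np.linalg.matrix_rank(A[:, :3]) == 2
```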

Properties

If n ≥ m, the following simple properties hold for the spark of an m × n matrix A:

  • spark(A) = m + 1 ⟹ rank(A) = m. (If the spark equals m + 1, then the matrix has full rank.)
  • spark(A) = 1 if and only if the matrix has a zero column.
  • spark(A) ≤ rank(A) + 1.[4]
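These properties can be illustrated on small assumed example matrices (chosen here for illustration, not taken from the source), using the brute-force definition of the spark:

```python
from itertools import combinations

import numpy as np

def spark(A: np.ndarray) -> int:
    # Brute force: smallest number of linearly dependent columns,
    # or m + 1 if all columns are independent.
    m, n = A.shape
    for k in range(1, n + 1):
        if any(np.linalg.matrix_rank(A[:, c]) < k
               for c in combinations(range(n), k)):
            return k
    return m + 1

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])          # 2 x 3 matrix, rank 2
m = A.shape[0]

# spark(A) = m + 1 here, and the matrix indeed has full rank m.
assert spark(A) == m + 1
assert np.linalg.matrix_rank(A) == m

# A zero column forces spark = 1.
B = np.array([[1., 0., 0.],
              [0., 1., 0.]])
assert spark(B) == 1

# spark(A) <= rank(A) + 1.
assert spark(A) <= np.linalg.matrix_rank(A) + 1
```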

Criterion for uniqueness of sparse solutions

The spark yields a simple criterion for uniqueness of sparse solutions of linear equation systems.[5] Given a linear equation system Ax = b, if this system has a solution x that satisfies ‖x‖₀ < spark(A)/2, then this solution is the sparsest possible solution. Here ‖x‖₀ denotes the number of nonzero entries of the vector x.
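As a toy illustration of the criterion (the matrix and right-hand side below are assumptions chosen for the sketch, not from the source):

```python
from itertools import combinations

import numpy as np

def spark(A: np.ndarray) -> int:
    # Brute force: smallest number of linearly dependent columns.
    m, n = A.shape
    for k in range(1, n + 1):
        if any(np.linalg.matrix_rank(A[:, c]) < k
               for c in combinations(range(n), k)):
            return k
    return m + 1

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
x = np.array([1., 0., 0.])     # candidate solution with one nonzero entry
b = A @ x

# ||x||_0 = 1 < spark(A)/2 = 1.5, so x is the unique sparsest solution.
assert np.count_nonzero(x) < spark(A) / 2

# Sanity check: the null space of A is spanned by (1, 1, -1), so every
# other solution x + t*(1, 1, -1) has at least two nonzero entries.
null_dir = np.array([1., 1., -1.])
assert np.allclose(A @ null_dir, 0)
```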

Lower bound in terms of dictionary coherence

If the columns of the matrix A are normalized to unit norm, we can lower bound its spark in terms of its dictionary coherence:[6][2]

    spark(A) ≥ 1 + 1/μ(A).

Here, the dictionary coherence μ(A) is defined as the maximum correlation between any two columns:

    μ(A) = max_{i ≠ j} |aᵢᵀ aⱼ| / (‖aᵢ‖₂ ‖aⱼ‖₂).
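The bound can be checked numerically. A small sketch (the example matrix is an assumption chosen here; since the spark is an integer, the bound may be rounded up):

```python
import numpy as np

def coherence(A: np.ndarray) -> float:
    # Normalize columns to unit norm, then take the largest absolute
    # inner product between two distinct columns.
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
mu = coherence(A)          # 1/sqrt(2) for this matrix
bound = 1 + 1 / mu         # about 2.414
# The spark is an integer, so spark(A) >= ceil(2.414) = 3, which
# matches the true spark of this matrix.
assert 2.41 < bound < 2.42
```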

Applications

The minimum distance of a linear code equals the spark of its parity-check matrix.
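This connection can be illustrated over GF(2), where a set of columns is linearly dependent exactly when some nonempty subset of them XORs to zero. The sketch below uses the parity-check matrix of the [7,4] Hamming code (a standard example chosen here, not from the source), whose minimum distance is 3:

```python
from itertools import combinations

import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the
# binary representation of j + 1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def gf2_spark(H: np.ndarray) -> int:
    """Spark over GF(2): size of the smallest nonempty set of columns
    summing to zero mod 2. Enumerating subsets in order of increasing
    size guarantees the first hit is the smallest dependent set."""
    m, n = H.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if not (H[:, cols].sum(axis=1) % 2).any():
                return k
    return m + 1

assert gf2_spark(H) == 3  # equals the Hamming code's minimum distance
```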

The concept of the spark is also of use in the theory of compressive sensing, where requirements on the spark of the measurement matrix are used to ensure stability and consistency of various estimation techniques.[4] It is also known in matroid theory as the girth of the vector matroid associated with the columns of the matrix. The spark of a matrix is NP-hard to compute.[1]

References

  1. Tillmann, Andreas M.; Pfetsch, Marc E. (November 8, 2013). "The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing". IEEE Transactions on Information Theory. 60 (2): 1248–1259. arXiv:1205.2081. doi:10.1109/TIT.2013.2290112. S2CID 2788088.
  2. Higham, Nicholas J.; Dennis, Mark R.; Glendinning, Paul; Martin, Paul A.; Santosa, Fadil; Tanner, Jared (2015-09-15). The Princeton Companion to Applied Mathematics. Princeton University Press. ISBN 978-1-4008-7447-7.
  3. Manchanda, Pammy; Lozi, René; Siddiqi, Abul Hasan (2017-10-18). Industrial Mathematics and Complex Systems: Emerging Mathematical Models, Methods and Algorithms. Springer. ISBN 978-981-10-3758-0.
  4. Donoho, David L.; Elad, Michael (March 4, 2003). "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization". Proc. Natl. Acad. Sci. 100 (5): 2197–2202. Bibcode:2003PNAS..100.2197D. doi:10.1073/pnas.0437847100. PMC 153464. PMID 16576749.
  5. Elad, Michael (2010). Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. p. 24.
  6. Elad, Michael (2010). Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. p. 26.