Expander walk sampling

In the mathematical discipline of graph theory, the expander walk sampling theorem intuitively states that sampling vertices in an expander graph by doing a relatively short random walk can simulate sampling the vertices independently from a uniform distribution. The earliest version of this theorem is due to Ajtai, Komlós & Szemerédi (1987), and the more general version is typically attributed to Gillman (1998).

Statement

Let $G=(V,E)$ be an n-vertex expander graph with positively weighted edges, and let $A\subset V$. Let $P$ denote the stochastic matrix of the graph, and let $\lambda_2$ be the second largest eigenvalue of $P$. Let $y_0, y_1, \ldots, y_{k-1}$ denote the vertices encountered in a $(k-1)$-step random walk on $G$ starting at vertex $y_0$, and let $\pi(A):=\lim_{k\to\infty}\frac{1}{k}\sum_{i=0}^{k-1}\mathbf{1}_A(y_i)$, where $\mathbf{1}_A(y)=1$ if $y\in A$ and $0$ otherwise.

(It is well known[1] that for almost all trajectories $y_0, y_1, \ldots$ the running average $\frac{1}{k}\sum_{i=0}^{k-1}\mathbf{1}_A(y_i)$ converges to the limiting value $\pi(A)$ as $k\to\infty$.)

The theorem states that for a weighted graph $G=(V,E)$ and a random walk $y_0, y_1, \ldots, y_{k-1}$ where $y_0$ is chosen by an initial distribution $\mathbf{q}$, for all $\gamma>0$, we have the following bound:

$$\Pr\left[\left|\frac{1}{k}\sum_{i=0}^{k-1}\mathbf{1}_A(y_i)-\pi(A)\right|\geq\gamma\right]\leq Ce^{-\frac{1}{20}\gamma^{2}(1-\lambda_2)k}.$$

Where $C$ is a constant depending on $G$ and $\mathbf{q}$.

The theorem gives a bound for the rate of convergence to $\pi(A)$ with respect to the length of the random walk, hence giving a more efficient method to estimate $\pi(A)$ compared to sampling the vertices of $G$ independently.
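
For illustration, the following minimal sketch implements the estimator that the theorem analyses: run one walk that visits $k$ vertices from a start vertex drawn from $\mathbf{q}$ and return the empirical frequency of visits to $A$. It assumes only numpy; the function name and parameters are illustrative and not taken from the sources.

```python
import numpy as np

def expander_walk_estimate(P, A, k, q, rng=None):
    """Estimate pi(A) from a single random walk visiting k vertices.

    P : (n, n) row-stochastic transition matrix of the (expander) graph
    A : iterable of vertex indices whose stationary measure is estimated
    k : number of vertices visited, i.e. a (k-1)-step walk
    q : initial distribution over the n vertices
    """
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    in_A = np.zeros(n, dtype=bool)
    in_A[list(A)] = True

    y = rng.choice(n, p=q)            # y_0 ~ q
    hits = int(in_A[y])
    for _ in range(k - 1):            # y_1, ..., y_{k-1}
        y = rng.choice(n, p=P[y])     # one step of the walk
        hits += int(in_A[y])
    return hits / k                   # (1/k) * sum_i 1_A(y_i)
```

By contrast, $k$ independent uniform samples would draw a fresh vertex on every iteration; the theorem says that the correlated samples produced by a single walk on an expander concentrate around $\pi(A)$ at a comparable exponential rate, with the spectral gap $1-\lambda_2$ controlling the exponent.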

Proof

In order to prove the theorem, we provide a few definitions followed by three lemmas.

Let $w_{xy}$ be the weight of the edge $xy\in E(G)$ and let $w_x=\sum_{y:xy\in E(G)}w_{xy}$. Denote by $\pi(x):=w_x/\sum_{y\in V}w_y$ the stationary distribution of the walk. Let $\frac{\mathbf{q}}{\sqrt{\pi}}$ be the vector with entries $\frac{\mathbf{q}(x)}{\sqrt{\pi(x)}}$, and let $N_{\pi,\mathbf{q}}=\left\|\frac{\mathbf{q}}{\sqrt{\pi}}\right\|_{2}$.

Let $D=\operatorname{diag}(1/w_x)$ and $M=(w_{xy})$. Let $P(r)=PE_r$, where $P$ is the stochastic matrix, $E_r=\operatorname{diag}\!\left(e^{r\mathbf{1}_A}\right)$ and $r\geq 0$. Then:

$$P=DM\qquad\text{and}\qquad M(r):=\sqrt{E_r}\,\sqrt{D}\,M\,\sqrt{D}\,\sqrt{E_r}$$

Where $M(r)$ is conjugate to $P(r)$ via the diagonal matrix $\sqrt{E_r}\,\sqrt{D}^{-1}$. As $M$ and $M(r)$ are symmetric, they have real eigenvalues. Therefore, as the eigenvalues of $M(r)$ and $P(r)$ are equal, the eigenvalues of $P(r)$ are real. Let $\lambda(r)$ and $\lambda_2(r)$ be the first and second largest eigenvalue of $P(r)$ respectively.

For convenience of notation, let $t_k:=\frac{1}{k}\sum_{i=0}^{k-1}\mathbf{1}_A(y_i)$, $\epsilon:=1-\lambda_2$, $\epsilon_r:=\lambda(r)-\lambda_2(r)$, and let $\mathbf{1}$ be the all-1 vector.
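
As a concrete illustration of these definitions, the following numpy sketch (variable names are ours, not Gillman's) builds $P$, $E_r$ and $M(r)$ for an arbitrary symmetric weight matrix and checks numerically that $P(r)$ and $M(r)$ share the same real spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# symmetric positive weight matrix M = (w_xy) on n vertices
n = 6
W = rng.random((n, n))
M = (W + W.T) / 2                      # w_xy = w_yx > 0
w = M.sum(axis=1)                      # w_x = sum_y w_xy

D = np.diag(1.0 / w)                   # D = diag(1/w_x)
P = D @ M                              # stochastic matrix, rows sum to 1

A = [0, 2]                             # some vertex subset
r = 0.3
ind = np.zeros(n); ind[A] = 1.0        # indicator vector 1_A
E_r = np.diag(np.exp(r * ind))         # E_r = diag(e^{r 1_A})

P_r = P @ E_r                          # P(r) = P E_r
sqD, sqE = np.sqrt(D), np.sqrt(E_r)
M_r = sqE @ sqD @ M @ sqD @ sqE        # symmetric M(r)

# P(r) and M(r) are conjugate, so their eigenvalues agree and are real
print(np.allclose(np.sort(np.linalg.eigvals(P_r).real),
                  np.sort(np.linalg.eigvalsh(M_r))))      # True
lam_r = np.linalg.eigvalsh(M_r)[-1]    # lambda(r), the largest eigenvalue
```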

Lemma 1

$$\Pr\left[t_k-\pi(A)\geq\gamma\right]\leq e^{-rk(\pi(A)+\gamma)}\,\mathbf{q}E_rP(r)^{k-1}\mathbf{1}$$

Proof:

By Markov's inequality,

$$\Pr\left[t_k\geq\pi(A)+\gamma\right]=\Pr\left[e^{rkt_k}\geq e^{rk(\pi(A)+\gamma)}\right]\leq e^{-rk(\pi(A)+\gamma)}\,\mathbf{E}_\mathbf{q}\!\left[e^{rkt_k}\right]$$

Where $\mathbf{E}_\mathbf{q}$ is the expectation with $y_0$ chosen according to the probability distribution $\mathbf{q}$. This expectation can be computed by summing over all possible trajectories $y_0, y_1, \ldots, y_{k-1}$:

$$\mathbf{E}_\mathbf{q}\!\left[e^{rkt_k}\right]=\sum_{y_0,\ldots,y_{k-1}}\mathbf{q}(y_0)\,e^{r\mathbf{1}_A(y_0)}\prod_{i=1}^{k-1}P_{y_{i-1}y_i}\,e^{r\mathbf{1}_A(y_i)}=\mathbf{q}E_rP(r)^{k-1}\mathbf{1}$$

Combining the two results proves the lemma.
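
The trajectory-sum identity used above can be checked numerically on a small graph; the following sketch (assuming numpy, with an arbitrary choice of $A$, $r$ and $\mathbf{q}$) compares the brute-force expectation with the matrix expression $\mathbf{q}E_rP(r)^{k-1}\mathbf{1}$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k, r = 4, 3, 0.5
W = rng.random((n, n)); M = (W + W.T) / 2
P = M / M.sum(axis=1, keepdims=True)       # stochastic matrix P = DM
q = np.full(n, 1.0 / n)                    # initial distribution
ind = np.zeros(n); ind[[0, 2]] = 1.0       # indicator of A = {0, 2}
E_r = np.diag(np.exp(r * ind))

# left: brute-force sum over all trajectories y_0, ..., y_{k-1}
lhs = 0.0
for traj in itertools.product(range(n), repeat=k):
    p = q[traj[0]]
    for a, b in zip(traj, traj[1:]):
        p *= P[a, b]
    lhs += p * np.exp(r * ind[list(traj)].sum())

# right: q E_r (P E_r)^{k-1} 1
rhs = q @ E_r @ np.linalg.matrix_power(P @ E_r, k - 1) @ np.ones(n)
print(np.isclose(lhs, rhs))                # True
```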

Lemma 2

For $r\geq 0$,

$$\mathbf{q}E_rP(r)^{k-1}\mathbf{1}\leq e^{r}\,N_{\pi,\mathbf{q}}\,\lambda(r)^{k-1}$$

Proof:

As the eigenvalues of $P(r)$ and $M(r)$ are equal,

$$\mathbf{q}E_rP(r)^{k-1}\mathbf{1}=\left(\mathbf{q}\sqrt{E_r}\sqrt{D}\right)M(r)^{k-1}\left(\sqrt{D}^{-1}\sqrt{E_r}\,\mathbf{1}\right)\leq\left\|\mathbf{q}\sqrt{E_r}\sqrt{D}\right\|_{2}\left\|M(r)\right\|_{2}^{k-1}\left\|\sqrt{D}^{-1}\sqrt{E_r}\,\mathbf{1}\right\|_{2}\leq e^{r}\,N_{\pi,\mathbf{q}}\,\lambda(r)^{k-1}$$

Lemma 3

If $r$ is a real number such that $0\leq r\leq\epsilon/10$,

$$\log\lambda(r)\leq r\pi(A)+\frac{5r^{2}}{\epsilon}$$

Proof summary:

We Taylor expand $\log\lambda(r)$ about the point $0$ to get:

$$\log\lambda(r)=r\,m_1+r^{2}\int_{0}^{1}(1-s)\,m_2(sr)\,ds$$

Where $m_1$ and $m_2(s)$ are the first and second derivatives of $\log\lambda$ at $0$ and at $s$, respectively. We show that $m_1=\lambda'(0)=\pi(A)$. We then prove that (i) $\epsilon_r\geq\epsilon-O(r)$ by matrix manipulation, and then prove (ii) $|m_2(s)|\leq\frac{10}{\epsilon}$ using (i) and Cauchy's estimate from complex analysis.

The results combine to show that

$$\log\lambda(r)\leq r\pi(A)+r^{2}\int_{0}^{1}(1-s)\,\frac{10}{\epsilon}\,ds=r\pi(A)+\frac{5r^{2}}{\epsilon}$$
A line-by-line proof can be found in Gillman (1998).

Proof of theorem

Combining lemma 1, lemma 2 and lemma 3 (and using $\lambda(r)\geq\lambda(0)=1$, so that $\lambda(r)^{k-1}\leq\lambda(r)^{k}$), we get that

$$\Pr\left[t_k-\pi(A)\geq\gamma\right]\leq e^{r}\,N_{\pi,\mathbf{q}}\,e^{-k\left(r\gamma-\frac{5r^{2}}{\epsilon}\right)}$$

Interpreting the exponent on the right-hand side of the inequality as a quadratic in $r$ and minimising the expression over $r$ (the minimum is attained at $r=\gamma\epsilon/10$), we see that

$$\Pr\left[t_k-\pi(A)\geq\gamma\right]\leq e^{\gamma\epsilon/10}\,N_{\pi,\mathbf{q}}\,e^{-\frac{1}{20}\gamma^{2}\epsilon k}$$

A similar bound

$$\Pr\left[t_k-\pi(A)\leq-\gamma\right]\leq e^{\gamma\epsilon/10}\,N_{\pi,\mathbf{q}}\,e^{-\frac{1}{20}\gamma^{2}\epsilon k}$$

holds, hence setting $C=2e^{\gamma\epsilon/10}\,N_{\pi,\mathbf{q}}$ and recalling that $\epsilon=1-\lambda_2$ gives the desired result.
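
The minimisation step can be checked symbolically; a small sketch, assuming sympy is available:

```python
import sympy as sp

r, gamma, eps = sp.symbols('r gamma epsilon', positive=True)
exponent = -(r * gamma - 5 * r**2 / eps)        # exponent/k from the bound above

r_star = sp.solve(sp.diff(exponent, r), r)[0]
print(r_star)                                   # gamma*epsilon/10
print(sp.simplify(exponent.subs(r, r_star)))    # -gamma**2*epsilon/20
```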

Uses

This theorem is useful in randomness reduction in the study of derandomization. Sampling from an expander walk is an example of a randomness-efficient sampler. Note that the number of bits used in sampling $k$ independent vertices of $G$ is $k\log n$, whereas if we sample from an infinite family of constant-degree expanders this costs only $\log n+O(k)$. Such families exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky-Phillips-Sarnak.
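
A back-of-the-envelope comparison of the two bit counts, assuming a $d$-regular expander in which each step of the walk consumes $\lceil\log_2 d\rceil$ fresh random bits (the concrete numbers below are illustrative, not from the references):

```python
import math

def random_bits_independent(n, k):
    # k independent uniform vertices: ceil(log2(n)) bits each
    return k * math.ceil(math.log2(n))

def random_bits_expander_walk(n, d, k):
    # one uniform start vertex plus one of d neighbours per step
    return math.ceil(math.log2(n)) + (k - 1) * math.ceil(math.log2(d))

n, d, k = 2**20, 8, 1000
print(random_bits_independent(n, k))       # 20000
print(random_bits_expander_walk(n, d, k))  # 20 + 999*3 = 3017
```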

References

  1. ^ Doob, J. L. (1953). Stochastic Processes. Wiley. Theorem 6.1.
  • Ajtai, M.; Komlós, J.; Szemerédi, E. (1987). "Deterministic simulation in LOGSPACE". Proceedings of the nineteenth annual ACM symposium on Theory of computing. STOC '87. ACM. pp. 132–140. doi:10.1145/28395.28410. ISBN 0897912217.
  • Gillman, D. (1998). "A Chernoff Bound for Random Walks on Expander Graphs". SIAM Journal on Computing. 27 (4). Society for Industrial and Applied Mathematics: 1203–1220. doi:10.1137/S0097539794268765. S2CID 26319459.