McDiarmid's inequality

In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid[1]) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large a change in the value of the function.

Statement

A function $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfies the bounded differences property if substituting the value of the $i$th coordinate $x_i$ changes the value of $f$ by at most $c_i$. More formally, if there are constants $c_1, c_2, \dots, c_n$ such that for all $i \in [n]$ and all $x_1 \in \mathcal{X}_1, x_2 \in \mathcal{X}_2, \ldots, x_n \in \mathcal{X}_n$,

$$\sup_{x_i' \in \mathcal{X}_i} \left| f(x_1, \dots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) - f(x_1, \dots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n) \right| \leq c_i.$$
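For instance, the empirical mean $f(x_1, \ldots, x_n) = \tfrac{1}{n}\sum_{i=1}^n x_i$ of values in $[0, 1]$ satisfies the bounded differences property with $c_i = 1/n$ for every $i$. The following Python sketch (an illustration added here, with a small grid standing in for the domain) verifies this by brute force:

```python
import itertools

# The empirical mean of n values in [0, 1] should change by at most
# c_i = 1/n when any single coordinate is replaced.
n = 4
c = 1.0 / n

def f(x):
    return sum(x) / len(x)

# Brute-force check of the bounded differences property on a small grid.
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
worst = 0.0
for x in itertools.product(grid, repeat=n):
    for i in range(n):
        for xi_new in grid:
            y = list(x)
            y[i] = xi_new
            worst = max(worst, abs(f(x) - f(y)))
print(worst <= c + 1e-12)  # True: one coordinate moves the mean by <= 1/n
```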

McDiarmid's Inequality[2] — Let $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, c_2, \dots, c_n$.

Consider independent random variables $X_1, X_2, \dots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \geq \varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2}\right),$$

$$\mathrm{P}\left(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \leq -\varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2}\right),$$

and as an immediate consequence,

$$\mathrm{P}\left(\left|f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2}\right).$$
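To illustrate the bound numerically (a sketch added here, not part of the original statement): taking $f$ to be the empirical mean of $n$ independent Bernoulli(1/2) variables gives $c_i = 1/n$ and $\sum_i c_i^2 = 1/n$, so the two-sided bound becomes $2\exp(-2\varepsilon^2 n)$. A Monte Carlo estimate confirms that the bound dominates the true tail:

```python
import numpy as np

# For f = empirical mean of n independent Bernoulli(1/2) variables,
# c_i = 1/n, so McDiarmid gives P(|f - E f| >= eps) <= 2 exp(-2 eps^2 n).
rng = np.random.default_rng(0)
n, eps, trials = 100, 0.1, 200_000

means = rng.integers(0, 2, size=(trials, n)).mean(axis=1)
empirical = np.mean(np.abs(means - 0.5) >= eps)
bound = 2 * np.exp(-2 * eps**2 * n)

print(f"empirical tail:  {empirical:.4f}")  # roughly 0.05 for these parameters
print(f"McDiarmid bound: {bound:.4f}")      # 2 e^-2 ~ 0.271, dominates the tail
```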

Extensions

Unbalanced distributions

A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.

McDiarmid's Inequality (unbalanced)[3][4] — Let $f: \mathcal{X}^n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, \dots, c_n$.

Consider independent random variables $X_1, X_2, \dots, X_n \in \mathcal{X}$ drawn from a distribution where there is a particular value $\chi_0 \in \mathcal{X}$ which occurs with probability $1 - p$. Then, for any $\varepsilon > 0$ with $\varepsilon \leq 2p\sum_{i=1}^n c_i$,

$$\mathrm{P}\left(\left|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]\right| \geq \varepsilon\right) \leq 2\exp\left(-\frac{\varepsilon^2}{8p\sum_{i=1}^n c_i^2}\right).$$

This may be used to characterize, for example, the value of a function on graphs when evaluated on sparse random graphs and hypergraphs, since in a sparse random graph, it is much more likely for any particular edge to be missing than to be present.
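As a numeric sketch (using the unbalanced bound exactly as stated above): for a function of $n$ $p$-biased bits with $c_i = 1$, the standard bound is insensitive to $p$, while the unbalanced bound sharpens as $p \to 0$:

```python
import numpy as np

# Compare the standard and unbalanced bounds for n p-biased arguments
# with c_i = 1, following the statements above.
n, p, eps = 1000, 0.01, 20.0  # eps <= 2 p sum c_i = 20, within the stated range

standard = 2 * np.exp(-2 * eps**2 / n)            # 2 exp(-2 eps^2 / sum c_i^2)
unbalanced = 2 * np.exp(-(eps**2) / (8 * p * n))  # 2 exp(-eps^2 / (8 p sum c_i^2))

print(f"standard:   {standard:.3e}")   # ~ 2 e^-0.8, nearly vacuous
print(f"unbalanced: {unbalanced:.3e}") # ~ 2 e^-5, substantially sharper
```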

Differences bounded with high probability

McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.

McDiarmid's Inequality (Differences bounded with high probability)[5] — Let $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function, let $\mathcal{Y} \subseteq \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n$ be a subset of its domain, and let $c_1, c_2, \dots, c_n \geq 0$ be constants such that for all pairs $(x_1, \ldots, x_n) \in \mathcal{Y}$ and $(y_1, \ldots, y_n) \in \mathcal{Y}$,

$$\left|f(x_1, \ldots, x_n) - f(y_1, \ldots, y_n)\right| \leq \sum_{i : x_i \neq y_i} c_i.$$

Consider independent random variables $X_1, X_2, \dots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Let $p = 1 - \mathrm{P}\left((X_1, \ldots, X_n) \in \mathcal{Y}\right)$ and let $m = \mathbb{E}\left[f(X_1, \ldots, X_n) \mid (X_1, \ldots, X_n) \in \mathcal{Y}\right]$. Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X_1, \ldots, X_n) - m \geq \varepsilon\right) \leq p + \exp\left(-\frac{2\max\left(0,\, \varepsilon - p\sum_{i=1}^n c_i\right)^2}{\sum_{i=1}^n c_i^2}\right),$$

and as an immediate consequence,

$$\mathrm{P}\left(\left|f(X_1, \ldots, X_n) - m\right| \geq \varepsilon\right) \leq 2p + 2\exp\left(-\frac{2\max\left(0,\, \varepsilon - p\sum_{i=1}^n c_i\right)^2}{\sum_{i=1}^n c_i^2}\right).$$
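As a quick evaluation of this bound (a sketch added here, with $c_i = 1/n$ assumed as for an empirical mean), the additive $2p$ term dominates once leaving $\mathcal{Y}$ becomes non-negligible:

```python
import numpy as np

# Evaluate the two-sided bound above as p = P((X_1, ..., X_n) not in Y) grows,
# assuming c_i = 1/n (as for an empirical mean).
n, eps = 100, 0.3
c = np.full(n, 1.0 / n)

for p in (0.0, 1e-3, 1e-2):
    slack = max(0.0, eps - p * c.sum())
    bound = 2 * p + 2 * np.exp(-2 * slack**2 / np.sum(c**2))
    print(f"p = {p:.0e}: bound = {bound:.4g}")  # the 2p term quickly dominates
```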

There exist stronger refinements to this analysis in some distribution-dependent scenarios,[6] such as those that arise in learning theory.

Sub-Gaussian and sub-exponential norms

Let the $k$th centered conditional version of a function $f$ be

$$f_k(X)(x) := f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X'_k}\left[f(x_1, \ldots, x_{k-1}, X'_k, x_{k+1}, \ldots, x_n)\right],$$

so that $f_k(X)$ is a random variable depending on random values of $x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n$.
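The centered conditional version can be approximated empirically. The following Python helper (a hypothetical illustration added here) draws Monte Carlo samples of $f_k(X)$ at a fixed setting of the other coordinates, approximating the inner conditional expectation by an empirical mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw Monte Carlo samples of the k-th centered conditional version f_k(X)
# at a fixed setting x_fixed of the other coordinates; the conditional
# expectation over X'_k is approximated by the empirical mean of the draws.
def centered_conditional_samples(f, x_fixed, k, sample_xk, m=10_000):
    draws = [sample_xk() for _ in range(m)]
    values = np.array([f([*x_fixed[:k], d, *x_fixed[k + 1:]]) for d in draws])
    return values - values.mean()

# Example: f = mean of 5 coordinates, resampling coordinate k = 2 uniformly.
fk = centered_conditional_samples(
    lambda x: sum(x) / len(x), [0.1, 0.9, 0.5, 0.3, 0.7], 2, rng.random
)
print(fk.mean())  # ~ 0 by construction (the samples are centered)
```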

McDiarmid's Inequality (Sub-Gaussian norm)[7][8] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function. Consider independent random variables $X = (X_1, X_2, \dots, X_n)$ where $X_i \in \mathcal{X}_i$ for all $i$.

Let $f_k(X)$ refer to the $k$th centered conditional version of $f$. Let $\|\cdot\|_{\psi_2}$ denote the sub-Gaussian norm of a random variable.

Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X) - \mathbb{E}[f(X)] \geq \varepsilon\right) \leq \exp\left(-\frac{\varepsilon^2}{32e \left\| \sum_{k=1}^n \left\| f_k(X) \right\|_{\psi_2}^2 \right\|_\infty}\right).$$

McDiarmid's Inequality (Sub-exponential norm)[8] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ be a function. Consider independent random variables $X = (X_1, X_2, \dots, X_n)$ where $X_i \in \mathcal{X}_i$ for all $i$.

Let $f_k(X)$ refer to the $k$th centered conditional version of $f$. Let $\|\cdot\|_{\psi_1}$ denote the sub-exponential norm of a random variable.

Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X) - \mathbb{E}[f(X)] \geq \varepsilon\right) \leq \exp\left(-\frac{\varepsilon^2}{4e^2 \left\| \sum_{k=1}^n \left\| f_k(X) \right\|_{\psi_1}^2 \right\|_\infty + 2e\varepsilon \max_k \left\| \left\| f_k(X) \right\|_{\psi_1} \right\|_\infty}\right).$$

Bennett and Bernstein forms

Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let

$$
\begin{aligned}
B &:= \max_{k \in [n]} \sup_{x_1, \ldots, x_n} \left| f(x_1, \ldots, x_{k-1}, x_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k}\left[f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n)\right] \right|, \\
V_k &:= \sup_{x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n} \mathbb{E}_{X_k}\left[\left(f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k}\left[f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n)\right]\right)^2\right], \\
\tilde{\sigma}^2 &:= \sum_{k=1}^n V_k.
\end{aligned}
$$

McDiarmid's Inequality (Bennett form)[4] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, \ldots, c_n$. Consider independent random variables $X_1, \ldots, X_n$ where $X_i \in \mathcal{X}_i$ for all $i$. Let $B$ and $\tilde{\sigma}^2$ be defined as at the beginning of this section.

Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon\right) \leq \exp\left(-\frac{\varepsilon}{2B}\log\left(1 + \frac{B\varepsilon}{\tilde{\sigma}^2}\right)\right).$$

McDiarmid's Inequality (Bernstein form)[4] — Let $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfy the bounded differences property with bounds $c_1, \ldots, c_n$. Let $B$ and $\tilde{\sigma}^2$ be defined as at the beginning of this section.

Then, for any $\varepsilon > 0$,

$$\mathrm{P}\left(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon\right) \leq \exp\left(-\frac{\varepsilon^2}{2\left(\tilde{\sigma}^2 + \frac{B\varepsilon}{3}\right)}\right).$$
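To see what the variance term buys (a numeric sketch following the Bernstein form above): for the sum of $n$ independent Bernoulli($q$) variables with small $q$, each $c_i = 1$ and $B \leq 1$, but $V_k = q(1-q)$, so $\tilde{\sigma}^2 = nq(1-q) \ll n$:

```python
import numpy as np

# Compare the bounded-differences bound with the Bernstein form for the sum
# of n Bernoulli(q) variables: c_i = 1, B <= 1, sigma~^2 = n q (1 - q).
n, q, eps = 1000, 0.01, 30.0
B = 1.0
sigma2 = n * q * (1 - q)

standard = np.exp(-2 * eps**2 / n)                            # exp(-1.8) ~ 0.17
bernstein = np.exp(-(eps**2) / (2 * (sigma2 + B * eps / 3)))  # ~ e^-22.6

print(f"standard:  {standard:.3e}")
print(f"bernstein: {bernstein:.3e}")  # orders of magnitude sharper here
```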

Proof

The following proof of McDiarmid's inequality[2] constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality). An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.[4]

For better readability, we will introduce a notational shorthand: $z_{i:j}$ will denote the tuple $(z_i, z_{i+1}, \ldots, z_j)$ for any integers $1 \leq i \leq j \leq n$, so that, for example,

$$f(X_{1:i-1}, y, x_{i+1:n}) := f(X_1, \ldots, X_{i-1}, y, x_{i+1}, \ldots, x_n).$$

Pick any $x'_1, x'_2, \ldots, x'_n$. Then, for any $x_1, x_2, \ldots, x_n$, by the triangle inequality,

$$
\begin{aligned}
\left| f(x_{1:n}) - f(x'_{1:n}) \right| &\leq \left| f(x_{1:n}) - f(x'_1, x_{2:n}) \right| + \left| f(x'_1, x_{2:n}) - f(x'_{1:n}) \right| \\
&\leq c_1 + \left| f(x'_1, x_{2:n}) - f(x'_{1:n}) \right| \\
&\;\;\vdots \\
&\leq c_1 + c_2 + \cdots + c_n,
\end{aligned}
$$

and thus $f$ is bounded.

Since $f$ is bounded, define the Doob martingale $\{Z_i\}$ (each $Z_i$ being a random variable depending on the random values of $X_1, \ldots, X_i$) as

$$Z_i := \mathbb{E}\left[f(X_{1:n}) \mid X_{1:i}\right]$$

for all $i \geq 1$ and $Z_0 := \mathbb{E}\left[f(X_{1:n})\right]$, so that $Z_n = f(X_{1:n})$.
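To make the construction concrete, here is a small simulation (an illustration added here, with $f$ taken to be the sum of $n$ fair coin flips, so that $c_i = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Doob martingale for f(X_1, ..., X_n) = X_1 + ... + X_n with fair coins:
# Z_i = E[f | X_1, ..., X_i] = X_1 + ... + X_i + (n - i) / 2.
n = 10
x = rng.integers(0, 2, size=n)

Z = [n / 2]  # Z_0 = E[f]
for i in range(1, n + 1):
    Z.append(x[:i].sum() + (n - i) / 2)  # condition on the first i coin flips

print(Z[-1] == x.sum())                                  # Z_n = f(X_1, ..., X_n)
print(max(abs(Z[i + 1] - Z[i]) for i in range(n)) <= 1)  # increments within c_i = 1
```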

Now define the random variables for each $i$

$$
\begin{aligned}
U_i &:= \sup_{x \in \mathcal{X}_i} \mathbb{E}\left[f(X_{1:i-1}, x, X_{i+1:n}) \mid X_{1:i-1}, X_i = x\right] - \mathbb{E}\left[f(X_{1:n}) \mid X_{1:i-1}\right], \\
L_i &:= \inf_{x \in \mathcal{X}_i} \mathbb{E}\left[f(X_{1:i-1}, x, X_{i+1:n}) \mid X_{1:i-1}, X_i = x\right] - \mathbb{E}\left[f(X_{1:n}) \mid X_{1:i-1}\right].
\end{aligned}
$$

Since $X_1, \ldots, X_n$ are independent of each other, conditioning on $X_i = x$ does not affect the probabilities of the other variables, so these are equal to the expressions

$$
\begin{aligned}
U_i &= \sup_{x \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[f(X_{1:i-1}, x, X_{i+1:n})\right] - \mathbb{E}_{X_{i:n}}\left[f(X_{1:n})\right], \\
L_i &= \inf_{x \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[f(X_{1:i-1}, x, X_{i+1:n})\right] - \mathbb{E}_{X_{i:n}}\left[f(X_{1:n})\right],
\end{aligned}
$$

where the expectations are taken only over the explicitly indicated variables, with $X_{1:i-1}$ held fixed at their realized values.

Note that $L_i \leq Z_i - Z_{i-1} \leq U_i$. In addition,

$$
\begin{aligned}
U_i - L_i &= \sup_{u \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[f(X_{1:i-1}, u, X_{i+1:n})\right] - \inf_{\ell \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[f(X_{1:i-1}, \ell, X_{i+1:n})\right] \\
&= \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[f(X_{1:i-1}, u, X_{i+1:n}) - f(X_{1:i-1}, \ell, X_{i+1:n})\right] \\
&\leq \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}_{X_{i+1:n}}\left[c_i\right] \\
&= c_i.
\end{aligned}
$$

Then, applying the general form of Azuma's inequality to $\{Z_i\}$, we have

$$\mathrm{P}\left(f(X_{1:n}) - \mathbb{E}\left[f(X_{1:n})\right] \geq \varepsilon\right) = \mathrm{P}\left(Z_n - Z_0 \geq \varepsilon\right) \leq \exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2}\right).$$

The one-sided bound in the other direction is obtained by applying Azuma's inequality to $\{-Z_i\}$, and the two-sided bound follows from a union bound. $\blacksquare$

References

  1. McDiarmid, Colin (1989). "On the method of bounded differences". Surveys in Combinatorics, 1989: Invited Papers at the Twelfth British Combinatorial Conference: 148–188.
  2. Doob, J. L. (1940). "Regularity properties of certain families of chance variables". Transactions of the American Mathematical Society. 47 (3): 455–486. doi:10.2307/1989964. JSTOR 1989964.
  3. Chou, Chi-Ning; Love, Peter J.; Sandhu, Juspreet Singh; Shi, Jonathan (2022). "Limitations of Local Quantum Algorithms on Random Max-k-XOR and Beyond". 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). 229: 41:13. arXiv:2108.06049. doi:10.4230/LIPIcs.ICALP.2022.41. Retrieved 8 July 2022.
  4. Ying, Yiming (2004). "McDiarmid's inequalities of Bernstein and Bennett forms". City University of Hong Kong. Retrieved 10 July 2022.
  5. Combes, Richard (2015). "An extension of McDiarmid's inequality". arXiv:1511.05240 [cs.LG].
  6. Wu, Xinxing; Zhang, Junping (April 2018). "Distribution-dependent concentration inequalities for tighter generalization bounds". Science China Information Sciences. 61 (4): 048105:1–048105:3. arXiv:1607.05506. doi:10.1007/s11432-017-9225-2. S2CID 255199895. Retrieved 10 July 2022.
  7. Kontorovich, Aryeh (22 June 2014). "Concentration in unbounded metric spaces and algorithmic stability". Proceedings of the 31st International Conference on Machine Learning. 32 (2): 28–36. arXiv:1309.1007. Retrieved 10 July 2022.
  8. Maurer, Andreas; Pontil, Massimiliano (2021). "Concentration inequalities under sub-Gaussian and sub-exponential conditions". Advances in Neural Information Processing Systems. 34: 7588–7597. Retrieved 10 July 2022.