# Lovász number

In graph theory, the Lovász number of a graph is a real number that is an upper bound on the Shannon capacity of the graph. It is also known as the Lovász theta function and is commonly denoted by ϑ(G). This quantity was first introduced by László Lovász in his 1979 paper "On the Shannon Capacity of a Graph".[1]

## Definition

Let G = (V, E) be a graph on n vertices. An ordered set of n unit vectors U = (ui | i ∈ V) ⊂ R^N is called an orthonormal representation of G in R^N if ui and uj are orthogonal whenever vertices i and j are not adjacent in G:

$u_i^\mathrm{T} u_j = \begin{cases} 1, & \mbox{if }i = j, \\ 0, & \mbox{if }ij \notin E. \end{cases}$

Clearly, every graph admits an orthonormal representation with N = n (just represent vertices by distinct vectors from the standard basis of R^n), but in general it might be possible to take N considerably smaller than the number of vertices n.
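As a quick sanity check of the definition, the sketch below builds the trivial standard-basis representation for a small example graph (the graph and its edge set are illustrative choices, not from the text) and verifies the orthogonality condition on non-adjacent pairs:

```python
import numpy as np
from itertools import combinations

# Trivial orthonormal representation of any graph on n vertices:
# vertex i -> i-th standard basis vector of R^n.  Basis vectors are
# pairwise orthogonal, so u_i^T u_j = 0 holds for *all* pairs,
# in particular for the non-adjacent ones the definition requires.
def standard_representation(n):
    return np.eye(n)

n = 4
edges = {(0, 1), (1, 2)}          # example graph: a path plus an isolated vertex
U = standard_representation(n)
for i, j in combinations(range(n), 2):
    if (i, j) not in edges:       # non-adjacent pairs must be orthogonal
        assert abs(U[i] @ U[j]) < 1e-12
print("valid orthonormal representation")
```

This only shows that N = n always suffices; finding representations with small N is where the geometry of the graph enters.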

The Lovász number ϑ of graph G is defined as follows:

$\vartheta(G) = \min_{c, U} \max_{i \in V} \frac{1}{(c^\mathrm{T} u_i)^2},$

where c is a unit vector in R^N and U is an orthonormal representation of G in R^N. Here the minimization is implicitly performed over the dimension N as well; however, without loss of generality it suffices to consider N = n.[2] Intuitively, this corresponds to minimizing the half-angle of a rotational cone containing all representing vectors of an orthonormal representation of G. If the optimal angle is ϕ, then ϑ(G) = 1/cos²(ϕ) and c corresponds to the symmetry axis of the cone.[3]


## Equivalent expressions

Let G = (V, E) be a graph on n vertices. Let A range over all n × n symmetric matrices such that aij = 1 whenever i = j or ij ∉ E, and let λmax(A) denote the largest eigenvalue of A. Then an alternative way of computing the Lovász number of G is as follows:[4]

$\vartheta(G) = \min_A \lambda_\text{max}(A).$
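For G = C5 this minimization can be explored numerically. The sketch below assumes, for simplicity, that the cyclic symmetry of C5 lets the five free edge entries share a single value t (making A circulant), and scans t; the minimum of λmax lands at √5, the value of ϑ(C5):

```python
import numpy as np

# lambda_max formulation for G = C5: a_ij is fixed to 1 on the diagonal
# and on non-edges; only the entries on the five edges are free.
# Simplifying assumption: by the cyclic symmetry of C5, let all edge
# entries share one value t, so A is a symmetric circulant matrix.
def lambda_max(t):
    first_row = np.array([1.0, t, 1.0, 1.0, t])   # diag, edge, non-edge, non-edge, edge
    A = np.array([np.roll(first_row, k) for k in range(5)])
    return np.linalg.eigvalsh(A)[-1]              # largest eigenvalue

ts = np.linspace(-2.0, 0.0, 4001)
theta = min(lambda_max(t) for t in ts)
print(theta, 5 ** 0.5)   # the minimum is attained near sqrt(5) ≈ 2.2360
```

In general the minimization over A is a semidefinite program; the one-parameter scan works here only because the symmetric guess happens to contain the optimum.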

The following method is dual to the previous one. Let B range over all n × n symmetric positive semidefinite matrices such that bij = 0 for every ij ∈ E and Tr(B) = 1. Here Tr denotes the trace (the sum of diagonal entries) and J is the n × n matrix of ones. Then[5]

$\vartheta(G) = \max_B \operatorname{Tr}(BJ).$

Note that Tr(BJ) is just the sum of all entries of B.
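To illustrate the dual formulation, for the empty graph on n vertices the matrix B = J/n is feasible: it is rank-one and hence positive semidefinite, it has trace 1, and the edge constraints are vacuous. It attains Tr(BJ) = n, matching the known value ϑ(K̄n) = n:

```python
import numpy as np

n = 4
J = np.ones((n, n))
B = J / n                            # rank-one, hence positive semidefinite
assert abs(np.trace(B) - 1) < 1e-12  # Tr(B) = 1
# The empty graph has no edges, so the constraint b_ij = 0 on edges is vacuous.
value = np.trace(B @ J)              # equals the sum of all entries of B
print(value)                         # 4.0 = n, matching theta(empty graph) = n
```

Any single feasible B gives a lower bound in the max formulation; here the bound n is already the exact value.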

The Lovász number can also be computed in terms of the complement graph Ḡ. Let d be a unit vector and V = (vi | i ∈ V) be an orthonormal representation of Ḡ. Then[6]

$\vartheta(G) = \max_{d,V} \sum_{i \in V} (d^\mathrm{T} v_i)^2.$

## Value of ϑ for some well-known graphs

| Graph | Value of ϑ[7] |
| --- | --- |
| Complete graph | $\vartheta(K_n) = 1$ |
| Empty graph | $\vartheta(\bar{K}_n) = n$ |
| Pentagon graph | $\vartheta(C_5) = \sqrt{5}$ |
| Cycle graphs | $\vartheta(C_n) = \begin{cases} \frac{n \cos(\pi/n)}{1 + \cos(\pi/n)} & \text{for odd } n, \\ \frac{n}{2} & \text{for even } n \end{cases}$ |
| Petersen graph | $\vartheta(KG_{5,2}) = 4$ |
| Kneser graphs | $\vartheta(KG_{n,k}) = \binom{n-1}{k-1}$ |
| Complete multipartite graphs | $\vartheta(K_{n_1,\dots,n_k}) = \max_{1 \leq i \leq k} n_i$ |
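These closed-form values can be cross-checked against one another; for instance, the odd-cycle formula at n = 5 reproduces ϑ(C5) = √5, and the Kneser formula at (n, k) = (5, 2) reproduces the Petersen value 4:

```python
import math

# Odd-cycle formula at n = 5 should reproduce theta(C5) = sqrt(5).
n = 5
odd_cycle = n * math.cos(math.pi / n) / (1 + math.cos(math.pi / n))
assert abs(odd_cycle - math.sqrt(5)) < 1e-9

# The Petersen graph is the Kneser graph KG(5,2); the Kneser formula
# gives binomial(5-1, 2-1) = 4.
assert math.comb(5 - 1, 2 - 1) == 4
print("table values consistent")
```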

## Properties

If G ⊠ H denotes the strong graph product of graphs G and H, then[8]

$\vartheta(G \boxtimes H) = \vartheta(G) \vartheta(H).$

If Ḡ is the complement of G, then[9]

$\vartheta(G) \vartheta(\bar{G}) \geq n.$
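Since C5 is isomorphic to its own complement, this inequality can be checked at n = 5, where it holds with equality (a sketch using only the known value ϑ(C5) = √5):

```python
import math

# C5 is self-complementary, so theta(C5) * theta(complement of C5)
# equals sqrt(5) * sqrt(5) = 5, attaining theta(G) * theta(G-bar) >= n
# with equality at n = 5.
theta_c5 = math.sqrt(5)
n = 5
product = theta_c5 * theta_c5
assert product >= n - 1e-9
print(product)   # 5.000...
```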

## Lovász "sandwich theorem"

The Lovász "sandwich theorem" states that the Lovász number always lies between two other numbers that are NP-hard to compute.[10] More precisely,

$\omega(G) \leq \vartheta(\bar{G}) \leq \chi(G),$

where ω(G) is the clique number of G (the size of the largest clique) and χ(G) is the chromatic number of G (the smallest number of colors needed to color the vertices of G so that no two adjacent vertices receive the same color). However, the value of ϑ(G) can be approximated by the ellipsoid method in time bounded by a polynomial in the number of vertices of G.[11]


## Applications

### Shannon capacity of a graph

The Shannon capacity of a graph G is defined as follows:

$\Theta(G) = \sup_k \sqrt[k]{\alpha(G^k)} = \lim_{k \rightarrow \infty} \sqrt[k]{\alpha(G^k)},$

where α(G) is the independence number of graph G (the size of a largest independent set of G) and G^k is the strong graph product of G with itself k times. Clearly, Θ(G) ≥ α(G). However, the Lovász number provides an upper bound on the Shannon capacity of a graph,[12] hence

$\alpha(G) \leq \Theta(G) \leq \vartheta(G).$

#### Example (pentagon)


Let the confusability graph of the channel be C5, a pentagon. Since Shannon's original paper (1956), determining the value of Θ(C5) had been an open problem. It was first established by Lovász (1979) that Θ(C5) = √5.

Clearly, Θ(C5) ≥ α(C5) = 2. However, α(C5^2) ≥ 5, since "11", "23", "35", "54", "42" are five mutually non-confusable messages; thus Θ(C5) ≥ √α(C5^2) ≥ √5.
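The claim that these five messages are mutually non-confusable can be verified mechanically. The sketch below (with vertices relabeled 0–4) builds the confusability test for the strong product C5 ⊠ C5 and checks all pairs:

```python
from itertools import combinations

# Vertices of C5 are 0..4; i and j are confusable when equal or adjacent.
def confusable(i, j):
    return i == j or (i - j) % 5 in (1, 4)

# In the strong product C5 x C5, two length-2 words are confusable
# exactly when they are confusable coordinate-wise.
def words_confusable(u, v):
    return all(confusable(a, b) for a, b in zip(u, v))

# The five codewords from the text, shifted to 0-based vertex labels:
# "11", "23", "35", "54", "42" -> (0,0), (1,2), (2,4), (4,3), (3,1)
codewords = [(0, 0), (1, 2), (2, 4), (4, 3), (3, 1)]
assert all(not words_confusable(u, v) for u, v in combinations(codewords, 2))
print("5 mutually non-confusable words => alpha(C5^2) >= 5")
```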

To show that this bound is tight, let U = (u1, ..., u5) be the following orthonormal representation of the pentagon:

$u_k = \begin{pmatrix} \cos{\theta} \\ \sin{\theta} \cos{\varphi_k} \\ \sin{\theta} \sin{\varphi_k} \end{pmatrix}, \quad \cos{\theta} = \frac{1}{\sqrt[4]{5}}, \quad \varphi_k = \frac{2 \pi k}{5}$

and let c = (1, 0, 0). By using this choice in the initial definition of the Lovász number, we get ϑ(C5) ≤ √5. Hence, Θ(C5) = √5.
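This construction, often called the "Lovász umbrella", can be checked numerically: the five vectors are unit vectors, non-adjacent pairs are orthogonal, and every vector makes the same angle with c, so the objective value is exactly √5:

```python
import numpy as np
from itertools import combinations

# The "Lovász umbrella": five unit vectors in R^3 on a cone around c = e_1.
cos_theta = 5 ** -0.25
sin_theta = (1 - cos_theta ** 2) ** 0.5
U = [np.array([cos_theta,
               sin_theta * np.cos(2 * np.pi * k / 5),
               sin_theta * np.sin(2 * np.pi * k / 5)]) for k in range(5)]
c = np.array([1.0, 0.0, 0.0])

for k, l in combinations(range(5), 2):
    if (k - l) % 5 in (2, 3):                 # non-adjacent pairs in C5
        assert abs(U[k] @ U[l]) < 1e-12       # must be orthogonal
value = max(1.0 / (c @ u) ** 2 for u in U)
print(value, 5 ** 0.5)                        # both ≈ 2.2360679...
```

Since the definition takes a minimum over all (c, U), this single feasible choice certifies ϑ(C5) ≤ √5.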


## Haemers bound

Haemers has found graphs for which Θ(G) < ϑ(G), i.e., the Shannon capacity is strictly smaller than the Lovász number.[13] This means that in general the Lovász bound is not tight. Haemers also provided an alternative bound, which is sometimes better than the Lovász bound:[14]

$\Theta(G) \leq R(G) = \min_{B} \operatorname{rank}(B),$

where B is an n × n matrix over some field, such that bii ≠ 0 and bij = 0 if distinct vertices i and j are not[15] adjacent.
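The constraints on B can be illustrated for C5. The choice B = I + A below is feasible but deliberately naive (an illustrative assumption, not Haemers' construction); it only certifies the trivial bound Θ(C5) ≤ 5, which shows that the strength of the bound comes from minimizing the rank over all feasible B and over the choice of field:

```python
import numpy as np

# G = C5 with vertices 0..4 and edges i ~ i+1 mod 5.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1.0

B = np.eye(n) + A                 # one feasible (but not optimal) choice of B
for i in range(n):
    assert B[i, i] != 0           # nonzero diagonal
    for j in range(n):
        if i != j and A[i, j] == 0:
            assert B[i, j] == 0   # zero on non-adjacent pairs
print(np.linalg.matrix_rank(B))   # 5: this B only gives Theta(C5) <= 5
```

Any single feasible B upper-bounds R(G) by its rank, so Θ(G) ≤ rank(B) holds for each of them; good bounds require a well-chosen B, often over a finite field.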


## Generalizations

The Lovász number has been generalized for "non-commutative graphs" in the context of quantum communication.[16]


## Notes

1. ^
2. ^ If N > n then one can always achieve a smaller objective value by restricting c to the subspace spanned by vectors ui which is at most n-dimensional.
3. ^ See Proposition 5.1 in Lovász (1995–2001), pp. 28.
4. ^ See Theorem 3 in Lovász (1979).
5. ^ See Theorem 4 in Lovász (1979).
6. ^ See Theorem 5 in Lovász (1979).
7. ^
8. ^ See Lemma 2 and Theorem 7 in Lovász (1979).
9. ^ See Corollary 2 in Lovász (1979).
10. ^
11. ^
12. ^ See Theorem 1 in Lovász (1979).
13. ^
14. ^
15. ^ There is a typo in the definition that appears in Haemers (1978).
16. ^

## References
