# Complex random vector

In probability theory and statistics, a complex random vector is typically a tuple of complex-valued random variables, and generally is a random variable taking values in a vector space over the field of complex numbers. If ${\displaystyle Z_{1},\ldots ,Z_{n}}$ are complex-valued random variables, then the n-tuple ${\displaystyle \left(Z_{1},\ldots ,Z_{n}\right)}$ is a complex random vector. Complex random vectors can always be considered as pairs of real random vectors: their real and imaginary parts.

Some concepts of real random vectors, such as the definition of the mean, have a straightforward generalization to complex random vectors. Other concepts are unique to complex random vectors.

Applications of complex random vectors are found in digital signal processing.

## Definition

A complex random vector ${\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{n})^{T}}$  on the probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)}$  is a function ${\displaystyle \mathbf {Z} \colon \Omega \rightarrow \mathbb {C} ^{n}}$  such that the vector ${\displaystyle (\Re {(Z_{1})},\Im {(Z_{1})},\ldots ,\Re {(Z_{n})},\Im {(Z_{n})})^{T}}$  is a real random vector on ${\displaystyle (\Omega ,{\mathcal {F}},P)}$  where ${\displaystyle \Re {(z)}}$  denotes the real part of ${\displaystyle z}$  and ${\displaystyle \Im {(z)}}$  denotes the imaginary part of ${\displaystyle z}$ .[1]:p. 292
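
The identification with a real random vector can be illustrated with a minimal NumPy sketch; the particular realization below is a hypothetical example, not part of the definition.

```python
import numpy as np

# One realization of a complex random vector Z with n = 3 components.
z = np.array([1 + 2j, -0.5 + 0.3j, 4 - 1j])

# The corresponding real random vector interleaves real and imaginary parts:
# (Re(Z_1), Im(Z_1), ..., Re(Z_n), Im(Z_n))^T
real_vector = np.column_stack((z.real, z.imag)).ravel()
print(real_vector)  # [ 1.   2.  -0.5  0.3  4.  -1. ]
```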

## Cumulative distribution function

The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form ${\displaystyle P(Z\leq 1+3i)}$  make no sense. However, expressions of the form ${\displaystyle P(\Re {(Z)}\leq 1,\Im {(Z)}\leq 3)}$  make sense. Therefore, the cumulative distribution function ${\displaystyle F_{\mathbf {Z} }:\mathbb {C} ^{n}\mapsto [0,1]}$  of a complex random vector ${\displaystyle \mathbf {Z} =(Z_{1},...,Z_{n})^{T}}$  is defined as

${\displaystyle F_{\mathbf {Z} }(\mathbf {z} )=\operatorname {P} (\Re {(Z_{1})}\leq \Re {(z_{1})},\Im {(Z_{1})}\leq \Im {(z_{1})},\ldots ,\Re {(Z_{n})}\leq \Re {(z_{n})},\Im {(Z_{n})}\leq \Im {(z_{n})})}$

(Eq.1)

where ${\displaystyle \mathbf {z} =(z_{1},...,z_{n})^{T}}$ .
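
Eq.1 can be estimated by Monte Carlo using only component-wise comparisons of real and imaginary parts. A minimal NumPy sketch, assuming a hypothetical vector whose real and imaginary parts are independent standard normals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: n = 2 components, real and imaginary parts i.i.d. N(0, 1).
m = 100_000
Z = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))

def cdf_estimate(samples, z):
    """Monte Carlo estimate of F_Z(z) as in Eq.1."""
    ok = (samples.real <= z.real) & (samples.imag <= z.imag)
    return ok.all(axis=1).mean()

print(cdf_estimate(Z, np.array([1 + 3j, 0 + 0j])))
```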

## Expectation

As in the real case, the expectation (also called expected value) of a complex random vector is taken component-wise.[1]:p. 293

${\displaystyle \operatorname {E} [\mathbf {Z} ]=(\operatorname {E} [Z_{1}],\ldots ,\operatorname {E} [Z_{n}])^{T}}$

(Eq.2)
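
Numerically, the expectation is estimated by the component-wise sample mean. A minimal sketch, assuming hypothetical Gaussian samples with expectation ${\displaystyle (1+2i,1+2i)^{T}}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: m samples of an n = 2 complex random vector
# with expectation (1+2j, 1+2j)^T.
m = 100_000
Z = (1 + 2j) + rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))

# Component-wise expectation (Eq.2), estimated by the sample mean:
print(Z.mean(axis=0))  # approximately [1+2j, 1+2j]
```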

## Covariance matrix and pseudo-covariance matrix

### Definitions

The covariance matrix (also called second central moment) ${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }}$  contains the covariances between all pairs of components. The covariance matrix of an ${\displaystyle n\times 1}$  random vector is an ${\displaystyle n\times n}$  matrix whose ${\displaystyle (i,j)}$ th element is the covariance between the i th and the j th random variables.[2]:p.372 Unlike in the case of real random variables, the covariance between two random variables involves the complex conjugate of one of the two. Thus the covariance matrix is a Hermitian matrix.[1]:p. 293

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }=\operatorname {cov} [\mathbf {Z} ,\mathbf {Z} ]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])}^{H}]=\operatorname {E} [\mathbf {Z} \mathbf {Z} ^{H}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {Z} ^{H}]}$

(Eq.3)

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{1}-\operatorname {E} [Z_{1}])}}]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{2}-\operatorname {E} [Z_{2}])}}]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(Z_{n}-\operatorname {E} [Z_{n}])}}]\end{bmatrix}}}$

The pseudo-covariance matrix (also called relation matrix) is defined as follows. In contrast to the covariance matrix defined above, Hermitian transposition is replaced by transposition in the definition.

${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }=\operatorname {cov} [\mathbf {Z} ,{\overline {\mathbf {Z} }}]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])}^{T}]=\operatorname {E} [\mathbf {Z} \mathbf {Z} ^{T}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {Z} ^{T}]}$

(Eq.4)

${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(Z_{n}-\operatorname {E} [Z_{n}])]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(Z_{n}-\operatorname {E} [Z_{n}])]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{1}-\operatorname {E} [Z_{1}])]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{2}-\operatorname {E} [Z_{2}])]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(Z_{n}-\operatorname {E} [Z_{n}])]\end{bmatrix}}}$
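
Both matrices can be estimated from samples. A minimal NumPy sketch using a hypothetical improper Gaussian example (real driving noise mixed by a complex matrix, so that the pseudo-covariance is nonzero); it also checks that the centered and raw-moment forms in Eq.3 and Eq.4 agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: m samples of an n = 3 complex random vector,
# stored as rows of Z.
m, n = 200_000, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = rng.standard_normal((m, n)) @ A.T + (1 - 1j)   # real noise => J_ZZ != 0

mu = Z.mean(axis=0)                  # E[Z], estimated component-wise
Zc = Z - mu                          # centered samples

K = Zc.T @ Zc.conj() / m             # K_ZZ = E[(Z-E[Z])(Z-E[Z])^H]  (Eq.3)
J = Zc.T @ Zc / m                    # J_ZZ = E[(Z-E[Z])(Z-E[Z])^T]  (Eq.4)

# Equivalent raw-moment forms from Eq.3 and Eq.4:
K2 = Z.T @ Z.conj() / m - np.outer(mu, mu.conj())
J2 = Z.T @ Z / m - np.outer(mu, mu)
print(np.allclose(K, K2), np.allclose(J, J2))   # True True
```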

### Properties

The covariance matrix is a Hermitian matrix, i.e.[1]:p. 293

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }^{H}=\operatorname {K} _{\mathbf {Z} \mathbf {Z} }}$ .

The pseudo-covariance matrix is a symmetric matrix, i.e.

${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }^{T}=\operatorname {J} _{\mathbf {Z} \mathbf {Z} }}$ .

The covariance matrix is a positive semidefinite matrix, i.e.

${\displaystyle \mathbf {a} ^{H}\operatorname {K} _{\mathbf {Z} \mathbf {Z} }\mathbf {a} \geq 0\quad {\text{for all }}\mathbf {a} \in \mathbb {C} ^{n}}$ .
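
These three properties can be verified numerically on sample estimates of ${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }}$ and ${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }}$; a minimal sketch with a hypothetical Gaussian example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: sample estimates of K_ZZ and J_ZZ.
m, n = 200_000, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) @ A.T
Zc = Z - Z.mean(axis=0)
K = Zc.T @ Zc.conj() / m
J = Zc.T @ Zc / m

print(np.allclose(K, K.conj().T))               # K is Hermitian
print(np.allclose(J, J.T))                      # J is symmetric
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))  # K is positive semidefinite
```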

### Covariance matrices of real and imaginary parts

By decomposing the random vector ${\displaystyle \mathbf {Z} }$  into its real part ${\displaystyle \mathbf {X} =\Re {(\mathbf {Z} )}}$  and imaginary part ${\displaystyle \mathbf {Y} =\Im {(\mathbf {Z} )}}$  (i.e. ${\displaystyle \mathbf {Z} =\mathbf {X} +i\mathbf {Y} }$ ), the matrices ${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }}$  and ${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {Z} }}$  can be related to the covariance matrices of ${\displaystyle \mathbf {X} }$  and ${\displaystyle \mathbf {Y} }$  via the following expressions:

${\displaystyle {\begin{aligned}&\operatorname {K} _{\mathbf {X} \mathbf {X} }=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathrm {T} }]={\tfrac {1}{2}}\operatorname {Re} (\operatorname {K} _{\mathbf {Z} \mathbf {Z} }+\operatorname {J} _{\mathbf {Z} \mathbf {Z} })\\&\operatorname {K} _{\mathbf {X} \mathbf {Y} }=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])^{\mathrm {T} }]={\tfrac {1}{2}}\operatorname {Im} (-\operatorname {K} _{\mathbf {Z} \mathbf {Z} }+\operatorname {J} _{\mathbf {Z} \mathbf {Z} })\\&\operatorname {K} _{\mathbf {Y} \mathbf {X} }=\operatorname {E} [(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\mathrm {T} }]={\tfrac {1}{2}}\operatorname {Im} (\operatorname {K} _{\mathbf {Z} \mathbf {Z} }+\operatorname {J} _{\mathbf {Z} \mathbf {Z} })\\&\operatorname {K} _{\mathbf {Y} \mathbf {Y} }=\operatorname {E} [(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])^{\mathrm {T} }]={\tfrac {1}{2}}\operatorname {Re} (\operatorname {K} _{\mathbf {Z} \mathbf {Z} }-\operatorname {J} _{\mathbf {Z} \mathbf {Z} })\end{aligned}}}$

and conversely

${\displaystyle {\begin{aligned}&\operatorname {K} _{\mathbf {Z} \mathbf {Z} }=\operatorname {K} _{\mathbf {X} \mathbf {X} }+\operatorname {K} _{\mathbf {Y} \mathbf {Y} }+i(\operatorname {K} _{\mathbf {Y} \mathbf {X} }-\operatorname {K} _{\mathbf {X} \mathbf {Y} })\\&\operatorname {J} _{\mathbf {Z} \mathbf {Z} }=\operatorname {K} _{\mathbf {X} \mathbf {X} }-\operatorname {K} _{\mathbf {Y} \mathbf {Y} }+i(\operatorname {K} _{\mathbf {Y} \mathbf {X} }+\operatorname {K} _{\mathbf {X} \mathbf {Y} })\end{aligned}}}$
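
These relations only use linearity of the expectation, so they also hold for sample estimates and can be verified numerically; a minimal sketch with hypothetical correlated real and imaginary parts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: correlated real and imaginary parts X and Y.
m, n = 200_000, 2
A = rng.standard_normal((2 * n, 2 * n))
XY = rng.standard_normal((m, 2 * n)) @ A.T
X, Y = XY[:, :n], XY[:, n:]
Z = X + 1j * Y

def cov(U, V):
    """Sample estimate of E[(U - E[U])(V - E[V])^H] for rows of samples."""
    Uc, Vc = U - U.mean(axis=0), V - V.mean(axis=0)
    return Uc.T @ Vc.conj() / len(U)

Kzz, Jzz = cov(Z, Z), cov(Z, Z.conj())            # K_ZZ and J_ZZ
Kxx, Kxy, Kyx, Kyy = cov(X, X), cov(X, Y), cov(Y, X), cov(Y, Y)

print(np.allclose(Kxx, 0.5 * (Kzz + Jzz).real))   # K_XX relation
print(np.allclose(Kxy, 0.5 * (-Kzz + Jzz).imag))  # K_XY relation
print(np.allclose(Kyx, 0.5 * (Kzz + Jzz).imag))   # K_YX relation
print(np.allclose(Kyy, 0.5 * (Kzz - Jzz).real))   # K_YY relation
print(np.allclose(Kzz, Kxx + Kyy + 1j * (Kyx - Kxy)))
print(np.allclose(Jzz, Kxx - Kyy + 1j * (Kyx + Kxy)))
```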

## Cross-covariance matrix and pseudo-cross-covariance matrix

### Definitions

The cross-covariance matrix between two complex random vectors ${\displaystyle \mathbf {Z} ,\mathbf {W} }$  is defined as:

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }=\operatorname {cov} [\mathbf {Z} ,\mathbf {W} ]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {W} -\operatorname {E} [\mathbf {W} ])}^{H}]=\operatorname {E} [\mathbf {Z} \mathbf {W} ^{H}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ^{H}]}$

(Eq.5)

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{1}-\operatorname {E} [W_{1}])}}]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{2}-\operatorname {E} [W_{2}])}}]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}]){\overline {(W_{n}-\operatorname {E} [W_{n}])}}]\end{bmatrix}}}$

And the pseudo-cross-covariance matrix is defined as:

${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {W} }=\operatorname {cov} [\mathbf {Z} ,{\overline {\mathbf {W} }}]=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ]){(\mathbf {W} -\operatorname {E} [\mathbf {W} ])}^{T}]=\operatorname {E} [\mathbf {Z} \mathbf {W} ^{T}]-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ^{T}]}$

(Eq.6)

${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {W} }={\begin{bmatrix}\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{1}-\operatorname {E} [Z_{1}])(W_{n}-\operatorname {E} [W_{n}])]\\\\\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{2}-\operatorname {E} [Z_{2}])(W_{n}-\operatorname {E} [W_{n}])]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{1}-\operatorname {E} [W_{1}])]&\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{2}-\operatorname {E} [W_{2}])]&\cdots &\mathrm {E} [(Z_{n}-\operatorname {E} [Z_{n}])(W_{n}-\operatorname {E} [W_{n}])]\end{bmatrix}}}$
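
Both matrices can be estimated from paired samples of ${\displaystyle \mathbf {Z} }$ and ${\displaystyle \mathbf {W} }$. A minimal sketch, using hypothetical correlated circularly symmetric Gaussian vectors (for which ${\displaystyle \operatorname {J} _{\mathbf {Z} \mathbf {W} }}$ comes out approximately zero):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: W is a noisy copy of Z (both circularly symmetric).
m, n = 200_000, 2
Z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
W = Z + 0.5 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))

Zc, Wc = Z - Z.mean(axis=0), W - W.mean(axis=0)
K_zw = Zc.T @ Wc.conj() / m      # cross-covariance matrix          (Eq.5)
J_zw = Zc.T @ Wc / m             # pseudo-cross-covariance matrix   (Eq.6)
print(K_zw.round(2))             # approximately 2 * identity
print(J_zw.round(2))             # approximately zero
```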

### Uncorrelatedness

Two complex random vectors ${\displaystyle \mathbf {Z} }$  and ${\displaystyle \mathbf {W} }$  are called uncorrelated if

${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }=\operatorname {J} _{\mathbf {Z} \mathbf {W} }=0}$ .

## Independence

Two complex random vectors ${\displaystyle \mathbf {Z} =(Z_{1},...,Z_{m})^{T}}$  and ${\displaystyle \mathbf {W} =(W_{1},...,W_{n})^{T}}$  are called independent if

${\displaystyle F_{\mathbf {Z,W} }(\mathbf {z,w} )=F_{\mathbf {Z} }(\mathbf {z} )\cdot F_{\mathbf {W} }(\mathbf {w} )\quad {\text{for all }}\mathbf {z} ,\mathbf {w} }$

(Eq.7)

where ${\displaystyle F_{\mathbf {Z} }(\mathbf {z} )}$  and ${\displaystyle F_{\mathbf {W} }(\mathbf {w} )}$  denote the cumulative distribution functions of ${\displaystyle \mathbf {Z} }$  and ${\displaystyle \mathbf {W} }$  as defined in Eq.1 and ${\displaystyle F_{\mathbf {Z,W} }(\mathbf {z,w} )}$  denotes their joint cumulative distribution function. Independence of ${\displaystyle \mathbf {Z} }$  and ${\displaystyle \mathbf {W} }$  is often denoted by ${\displaystyle \mathbf {Z} \perp \!\!\!\perp \mathbf {W} }$ . Written component-wise, ${\displaystyle \mathbf {Z} }$  and ${\displaystyle \mathbf {W} }$  are called independent if

${\displaystyle F_{Z_{1},\ldots ,Z_{m},W_{1},\ldots ,W_{n}}(z_{1},\ldots ,z_{m},w_{1},\ldots ,w_{n})=F_{Z_{1},\ldots ,Z_{m}}(z_{1},\ldots ,z_{m})\cdot F_{W_{1},\ldots ,W_{n}}(w_{1},\ldots ,w_{n})\quad {\text{for all }}z_{1},\ldots ,z_{m},w_{1},\ldots ,w_{n}}$ .

## Circular symmetry

### Definition

A complex random vector ${\displaystyle \mathbf {Z} }$  is called circularly symmetric if for every deterministic ${\displaystyle \varphi \in [-\pi ,\pi )}$  the distribution of ${\displaystyle e^{\mathrm {i} \varphi }\mathbf {Z} }$  equals the distribution of ${\displaystyle \mathbf {Z} }$ .[3]:pp. 500–501
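
A minimal numerical sketch of this invariance, using a hypothetical circularly symmetric complex Gaussian vector and comparing Monte Carlo estimates of the cumulative distribution function (Eq.1) for ${\displaystyle \mathbf {Z} }$ and ${\displaystyle e^{\mathrm {i} \varphi }\mathbf {Z} }$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: Z with i.i.d. circularly symmetric Gaussian components.
m, n = 500_000, 2
Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
Z_rot = np.exp(1j * 0.7) * Z      # rotate every sample by the same phase

def cdf_estimate(samples, z):
    """Monte Carlo estimate of the CDF from Eq.1."""
    ok = (samples.real <= z.real) & (samples.imag <= z.imag)
    return ok.all(axis=1).mean()

z0 = np.array([0.3 - 0.2j, -0.1 + 0.4j])
print(cdf_estimate(Z, z0), cdf_estimate(Z_rot, z0))   # approximately equal
```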

### Properties

• The expectation of a circularly symmetric complex random vector is either zero or it is not defined.[3]:p. 500
• The pseudo-covariance matrix of a circularly symmetric complex random vector is zero.[3]:p. 584

## Proper complex random vectors

### Definition

A complex random vector ${\displaystyle \mathbf {Z} }$  is called proper if the following three conditions are all satisfied:[1]:p. 293

• ${\displaystyle \operatorname {E} [\mathbf {Z} ]=0}$  (zero mean)
• ${\displaystyle \operatorname {var} [Z_{1}]<\infty ,\ldots ,\operatorname {var} [Z_{n}]<\infty }$  (all components have finite variance)
• ${\displaystyle \operatorname {E} [\mathbf {Z} \mathbf {Z} ^{T}]=0}$

Two complex random vectors ${\displaystyle \mathbf {Z} ,\mathbf {W} }$  are called jointly proper if the composite random vector ${\displaystyle (Z_{1},Z_{2},\ldots ,Z_{m},W_{1},W_{2},\ldots ,W_{n})^{T}}$  is proper.
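
The three conditions can be checked on sample estimates. A minimal sketch, assuming a hypothetical zero-mean circularly symmetric Gaussian vector (which is proper, see the properties below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: zero-mean circularly symmetric Gaussian samples.
m, n = 500_000, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) @ A.T / np.sqrt(2)

print(np.abs(Z.mean(axis=0)).max())   # ~ 0: zero mean
print(np.var(Z, axis=0))              # finite component variances
print(np.abs(Z.T @ Z / m).max())      # ~ 0: E[Z Z^T] = 0
```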

### Properties

• A complex random vector ${\displaystyle \mathbf {Z} }$  is proper if, and only if, for all (deterministic) vectors ${\displaystyle \mathbf {c} \in \mathbb {C} ^{n}}$  the complex random variable ${\displaystyle \mathbf {c} ^{T}\mathbf {Z} }$  is proper.[1]:p. 293
• Linear transformations of proper complex random vectors are proper, i.e. if ${\displaystyle \mathbf {Z} }$  is a proper random vector with ${\displaystyle n}$  components and ${\displaystyle A}$  is a deterministic ${\displaystyle m\times n}$  matrix, then the complex random vector ${\displaystyle A\mathbf {Z} }$  is also proper.[1]:p. 295
• Every circularly symmetric complex random vector with finite variance of all its components is proper.[1]:p. 295
• There are proper complex random vectors that are not circularly symmetric.[1]:p. 504
• A real random vector is proper if and only if it is constant.
• Two jointly proper complex random vectors are uncorrelated if and only if their cross-covariance matrix is zero, i.e. if ${\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }=0}$ .

## Cauchy-Schwarz inequality

The Cauchy-Schwarz inequality for complex random vectors is

${\displaystyle \left|\operatorname {E} [\mathbf {Z} ^{H}\mathbf {W} ]\right|^{2}\leq \operatorname {E} [\mathbf {Z} ^{H}\mathbf {Z} ]\operatorname {E} [|\mathbf {W} ^{H}\mathbf {W} |]}$ .
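
Since a sample average is itself an expectation (with respect to the empirical distribution), the inequality also holds for Monte Carlo estimates. A minimal sketch with hypothetical correlated Gaussian vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: W correlated with Z.
m, n = 100_000, 3
Z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
W = 2 * Z + rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

lhs = np.abs(np.mean(np.sum(Z.conj() * W, axis=1))) ** 2        # |E[Z^H W]|^2
rhs = (np.mean(np.sum(Z.conj() * Z, axis=1)).real
       * np.mean(np.sum(W.conj() * W, axis=1)).real)            # E[Z^H Z] E[W^H W]
print(lhs <= rhs)   # True
```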

## Characteristic function

The characteristic function of a complex random vector ${\displaystyle \mathbf {Z} }$  with ${\displaystyle n}$  components is a function ${\displaystyle \mathbb {C} ^{n}\to \mathbb {C} }$  defined by:[1]:p. 295

${\displaystyle \varphi _{\mathbf {Z} }(\mathbf {\omega } )=\operatorname {E} \left[e^{i\Re {(\mathbf {\omega } ^{H}\mathbf {Z} )}}\right]=\operatorname {E} \left[e^{i(\Re {(\omega _{1})}\Re {(Z_{1})}+\Im {(\omega _{1})}\Im {(Z_{1})}+\cdots +\Re {(\omega _{n})}\Re {(Z_{n})}+\Im {(\omega _{n})}\Im {(Z_{n})})}\right]}$
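
The characteristic function can be estimated by Monte Carlo at any argument ${\displaystyle \mathbf {\omega } }$. A minimal sketch, assuming a hypothetical circularly symmetric Gaussian vector with ${\displaystyle \operatorname {E} [|Z_{k}|^{2}]=1}$, for which the exact value ${\displaystyle e^{-\|\mathbf {\omega } \|^{2}/4}}$ is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: i.i.d. circularly symmetric Gaussian components
# with E[|Z_k|^2] = 1.
m, n = 500_000, 2
Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

omega = np.array([1.0 + 0.5j, -0.3 + 2.0j])
re_part = (Z @ omega.conj()).real              # Re(omega^H Z) for every sample
phi_est = np.mean(np.exp(1j * re_part))        # Monte Carlo estimate of phi_Z(omega)

print(phi_est)
print(np.exp(-np.sum(np.abs(omega) ** 2) / 4))   # exact value for this example
```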