Hilbert transform

In mathematics and in signal processing, the Hilbert transform is a linear operator which takes a function, u(t), and produces a function, H(u)(t), with the same domain. The Hilbert transform is named after David Hilbert, who first introduced the operator in order to solve a special case of the Riemann–Hilbert problem for holomorphic functions. It is a basic tool in Fourier analysis, and provides a concrete means for realizing the harmonic conjugate of a given function or Fourier series. Furthermore, in harmonic analysis, it is an example of a singular integral operator, and of a Fourier multiplier. The Hilbert transform is also important in the field of signal processing where it is used to derive the analytic representation of a signal u(t).

The Hilbert transform was originally defined for periodic functions, or equivalently for functions on the circle, in which case it is given by convolution with the Hilbert kernel. More commonly, however, the Hilbert transform refers to a convolution with the Cauchy kernel, for functions defined on the real line R (the boundary of the upper half-plane). The Hilbert transform is closely related to the Paley–Wiener theorem, another result relating holomorphic functions in the upper half-plane and Fourier transforms of functions on the real line.

The Hilbert transform, in red, of a square wave, in blue

Introduction

The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt). Because h(t) is not integrable, the integrals defining the convolution do not converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by

$H(u)(t) = \text{p.v.} \int_{-\infty}^{\infty}u(\tau) h(t-\tau)\, d\tau =\frac{1}{\pi} \ \text{p.v.} \int_{-\infty}^{\infty} \frac{u(\tau)}{t-\tau}\, d\tau,$

provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/πt (due to Schwartz (1950); see Pandey (1996, Chapter 3)). Alternatively, by changing variables, the principal value integral can be written explicitly (Zygmund 1968, §XVI.1) as

$H(u)(t) = -\frac{1}{\pi}\lim_{\varepsilon\downarrow 0}\int_{\varepsilon}^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau.$

When the Hilbert transform is applied twice in succession to a function u, the result is negative u:

$H(H(u))(t) = -u(t),\,$

provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is −H. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see Relationship with the Fourier transform, below).
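As a numerical illustration (a sketch added here, not part of the original article; the function name and discretization parameters are arbitrary choices), the symmetric principal-value form can be evaluated directly for u = cos and u = sin, whose Hilbert transforms are sin and −cos respectively:

```python
import numpy as np

def hilbert_pv(u, t, eps=1e-6, T=2000.0, dtau=1e-3):
    # Symmetric principal-value form:
    #   H(u)(t) = -(1/pi) * lim_{eps -> 0} int_eps^inf [u(t+tau) - u(t-tau)]/tau dtau.
    # The tau -> 0 singularity cancels because the numerator is O(tau)
    # for smooth u; the upper limit is truncated at T.
    tau = np.arange(eps, T, dtau)
    return -np.sum((u(t + tau) - u(t - tau)) / tau) * dtau / np.pi

t0 = 0.7
Hcos = hilbert_pv(np.cos, t0)   # expect approximately  sin(t0)
Hsin = hilbert_pv(np.sin, t0)   # expect approximately -cos(t0)
```

Consistent with H(H(u)) = −u, the two results chain together: applying the transform to cos gives sin, and applying it again gives −cos.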

For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if f(z) is analytic in the half-plane Im z > 0 and u(t) = Re f(t + 0·i), then Im f(t + 0·i) = H(u)(t) up to an additive constant, provided this Hilbert transform exists.

Notation

In signal processing the Hilbert transform of u(t) is commonly denoted by $\widehat u(t).\,$ However, in mathematics, this notation is already extensively used to denote the Fourier transform of u(t). Occasionally, the Hilbert transform may be denoted by $\tilde{u}(t)$. Furthermore, many sources define the Hilbert transform as the negative of the one defined here.


History

The Hilbert transform arose in Hilbert's 1905 work on a problem posed by Riemann concerning analytic functions (Kress (1989); Bitsadze (2001)), which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle (Khvedelidze 2001; Hilbert 1953). Some of his earlier work related to the discrete Hilbert transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation (Hardy, Littlewood & Polya 1952, §9.1). Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case (Hardy, Littlewood & Polya 1952, §9.2). These results were restricted to the spaces L2 and ℓ2. In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in Lp(R) for 1 ≤ p < ∞, that the Hilbert transform is a bounded operator on Lp(R) for 1 < p < ∞, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform (Riesz 1928). The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals (Calderón & Zygmund 1952). Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms, are still active areas of research today.


Relationship with the Fourier transform

The Hilbert transform is a multiplier operator (Duoandikoetxea 2000, Chapter 3). The symbol of H is σH(ω) = −i sgn(ω) where sgn is the signum function. Therefore:

$\mathcal{F}(H(u))(\omega) = (-i\,\operatorname{sgn}(\omega))\cdot \mathcal{F}(u)(\omega)\,$

where $\mathcal{F}$ denotes the Fourier transform. Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of $\mathcal{F}.$

$\sigma_H(\omega) \, \ =\ \begin{cases} \ \ i = e^{+i\pi/2}, & \mbox{for } \omega < 0\\ \ \ \ \ 0, & \mbox{for } \omega = 0\\ \ \ -i = e^{-i\pi/2}, & \mbox{for } \omega > 0. \end{cases}$

Therefore H(u)(t) has the effect of shifting the phase of the negative frequency components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°. And i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation.

When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of u(t) are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated, i.e., H(H(u)) = −u, because:

$\big(\sigma_H(\omega)\big)^2 = e^{\pm i\pi} = -1 \qquad \text{for } \omega\neq 0.$
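The multiplier description can be mimicked with a DFT on a periodic, bin-aligned test signal (an illustrative discrete sketch, not the continuous operator itself; names are arbitrary):

```python
import numpy as np

def hilbert_fft(u):
    # Apply the discrete analogue of the multiplier sigma_H(omega) = -i*sgn(omega).
    U = np.fft.fft(u)
    omega = np.fft.fftfreq(len(u))            # signed normalized frequencies
    return np.fft.ifft(-1j * np.sign(omega) * U).real

N = 1024
t = 2 * np.pi * np.arange(N) / N
u = np.cos(8 * t)            # bin-aligned cosine: the DFT model is exact here
Hu = hilbert_fft(u)          # -90 degree shift of positive frequencies: cos -> sin
HHu = hilbert_fft(Hu)        # applying H twice negates the signal
```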

Table of selected Hilbert transforms

Each entry lists a signal $u(t)\,$ followed by its Hilbert transform[fn 1] $H(u)(t)\,$:

• $\sin(t)\,$[fn 2] → $-\cos(t)\,$
• $\cos(t)\,$[fn 2] → $\sin(t)\,$
• $\exp \left( i t \right)$ → $- i \exp \left( i t \right)$
• $\exp \left( -i t \right)$ → $i \exp \left( -i t \right)$
• $\frac{1}{t^2 + 1}$ → $\frac{t}{t^2 + 1}$
• Sinc function $\frac{\sin(t)}{t}$ → $\frac{1- \cos(t)}{t}$
• Rectangular function $\sqcap(t)$ → $\frac{1}{\pi} \log \left | \frac{t+\frac{1}{2}}{t-\frac{1}{2}} \right |$
• Dirac delta function $\delta(t) \,$ → $\frac{1}{\pi t}$
• Characteristic function $\chi_{[a,b]}(t) \,$ → $\frac{1}{\pi}\log \left \vert \frac{t-a}{t-b}\right \vert \,$
Notes
1. ^ Some authors, e.g., Bracewell, use our −H as their definition of the forward transform. A consequence is that the right column of this table would be negated.
2. ^ a b The Hilbert transform of the sin and cos functions can be defined in a distributional sense, if there is a concern that the integral defining them is otherwise conditionally convergent. In the periodic setting this result holds without any difficulty.

An extensive table of Hilbert transforms is available (King 2009). Note that the Hilbert transform of a constant is zero.
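Individual table entries can be spot-checked numerically. For instance, for the characteristic function χ[a,b] and a point t outside [a,b], the defining integral has no singularity and plain quadrature suffices (an illustrative sketch; names and parameters are arbitrary):

```python
import numpy as np

def hilbert_indicator(t, a, b, n=200001):
    # H(chi_[a,b])(t) = (1/pi) p.v. int_a^b dtau/(t - tau);
    # the integrand is smooth when t lies outside [a, b].
    tau, dtau = np.linspace(a, b, n, retstep=True)
    w = np.ones(n)
    w[0] = w[-1] = 0.5                 # trapezoid-rule weights
    return np.sum(w / (t - tau)) * dtau / np.pi

t0, a, b = 3.0, -1.0, 1.0
approx = hilbert_indicator(t0, a, b)
exact = np.log(abs((t0 - a) / (t0 - b))) / np.pi   # the table entry
```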


Domain of definition

It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in Lp(R) for 1 < p < ∞.

More precisely, if u is in Lp(R) for 1 < p < ∞, then the limit defining the improper integral

$H(u)(t) = -\frac{1}{\pi}\lim_{\epsilon\downarrow 0}\int_\epsilon^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau$

exists for almost every t. The limit function is also in Lp(R), and is in fact the limit in the mean of the improper integral as well. That is,

$-\frac{1}{\pi}\int_\epsilon^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau\to H(u)(t)$

as ε→0 in the Lp-norm, as well as pointwise almost everywhere, by the Titchmarsh theorem (Titchmarsh 1948, Chapter 5).

In the case p=1, the Hilbert transform still converges pointwise almost everywhere, but may fail to be itself integrable, even locally (Titchmarsh 1948, §5.14). In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an L1 function does converge, however, in L1-weak, and the Hilbert transform is a bounded operator from L1 to L1,w (Stein & Weiss 1971, Lemma V.2.8). (In particular, since the Hilbert transform is also a multiplier operator on L2, Marcinkiewicz interpolation and a duality argument furnish an alternative proof that H is bounded on Lp.)


Properties

Boundedness

If 1<p<∞, then the Hilbert transform on Lp(R) is a bounded linear operator, meaning that there exists a constant Cp such that

$\|Hu\|_p \le C_p\| u\|_p$

for all u ∈ Lp(R). This theorem is due to Riesz (1928, VII); see also Titchmarsh (1948, Theorem 101). The best constant Cp is given by

$C_p=\begin{cases}\tan \frac{\pi}{2p} & \text{for } 1 < p\leq 2,\\ \cot\frac{\pi}{2p} & \text{for } 2 < p < \infty. \end{cases}$

This result is due to (Pichorides 1972); see also Grafakos (2004, Remark 4.1.8). The same best constants hold for the periodic Hilbert transform.

The boundedness of the Hilbert transform implies the Lp(R) convergence of the symmetric partial sum operator

$S_R f(x) = \int_{-R}^{R}\hat{f}(\xi)e^{2\pi i x\xi}\,d\xi$

to f in Lp(R), see for example (Duoandikoetxea 2000, p. 59).

The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between Lp(R) and the dual space Lq(R), where p and q are Hölder conjugates and 1 < p,q < ∞. Symbolically,

$\langle Hu, v\rangle = \langle u, -Hv\rangle$

for u ∈ Lp(R) and v ∈ Lq(R) (Titchmarsh 1948, Theorem 102).
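The pairing identity has an exact discrete counterpart, checked here with the DFT-multiplier model of H and the Euclidean inner product (an illustrative sketch; names are arbitrary):

```python
import numpy as np

def hilbert_fft(u):
    # Discrete multiplier model of H: -i*sgn(omega) in the DFT domain.
    U = np.fft.fft(u)
    return np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(u))) * U).real

rng = np.random.default_rng(0)
u = rng.standard_normal(512)
v = rng.standard_normal(512)
lhs = np.dot(hilbert_fft(u), v)     # <Hu, v>
rhs = np.dot(u, -hilbert_fft(v))    # <u, -Hv>
```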

Inverse transform

The Hilbert transform is an anti-involution (Titchmarsh 1948, p. 120), meaning that

$H(H(u)) = -u\,$

provided each transform is well-defined. Since H preserves the space Lp(R), this implies in particular that the Hilbert transform is invertible on Lp(R), and that

$H^{-1} = -H.\,$

Differentiation

Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute:

$H\left(\frac{du}{dt}\right) = \frac{d}{dt}H(u).$

Iterating this identity,

$H\left(\frac{d^ku}{dt^k}\right) = \frac{d^k}{dt^k}H(u).$

This is rigorously true as stated provided u and its first k derivatives belong to Lp(R) (Pandey 1996, §3.3). One can check this easily in the frequency domain, where differentiation becomes multiplication by ω.
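Since both operators are diagonal in the Fourier domain, they commute exactly in a discrete periodic model as well (a sketch with an arbitrary smooth periodic test signal):

```python
import numpy as np

def hilbert_fft(u):
    U = np.fft.fft(u)
    return np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(u))) * U).real

def deriv_fft(u):
    # Spectral derivative: differentiation is multiplication by i*omega.
    N = len(u)
    omega = 2j * np.pi * np.fft.fftfreq(N)
    return np.fft.ifft(omega * np.fft.fft(u)).real

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.exp(np.cos(t))                 # smooth periodic test signal
a = deriv_fft(hilbert_fft(u))         # (d/dt) H(u)
b = hilbert_fft(deriv_fft(u))         # H(du/dt)
```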

Convolutions

The Hilbert transform can formally be realized as a convolution with the tempered distribution (Duistermaat & Kolk 2010, p. 211)

$h(t) = \text{p.v. }\frac{1}{\pi t}.$

Thus formally,

$H(u) = h*u.\,$

However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in Lp. Alternatively, one may use the fact that h(t) is the distributional derivative of the function log|t|/π; to wit

$H(u)(t) = \frac{d}{dt}\left(\frac{1}{\pi} (u*\log|\cdot|)(t)\right).$

For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform on either factor:

$H(u*v) = H(u)*v = u*H(v).\,$

This is rigorously true if u and v are compactly supported distributions since, in that case,

$h*(u*v) = (h*u)*v = u*(h*v).\,$

By passing to an appropriate limit, it is thus also true if u ∈ Lp and v ∈ Lr provided

$\frac{1}{p} + \frac{1}{r} > 1,$

a theorem due to Titchmarsh (1948, Theorem 104).

Invariance

The Hilbert transform has the following invariance properties on L2(R).

• It commutes with translations. That is, it commutes with the operators Taƒ(x) = ƒ(x + a) for all a in R.
• It commutes with positive dilations. That is, it commutes with the operators Mλƒ(x) = ƒ(λx) for all λ > 0.
• It anticommutes with the reflection Rƒ(x) = ƒ(−x).

Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L2 with these properties (Stein 1970, §III.1).

In fact there is a larger group of operators commuting with the Hilbert transform. The group SL(2,R) acts by unitary operators Ug on the space L2(R) by the formula

$\displaystyle{U_{g}^{-1}f(x) =(cx+d)^{-1} f\left({ax+b\over cx +d}\right),\,\,\,g=\begin{pmatrix} a & b \\ c & d \end{pmatrix}.}$

This unitary representation is an example of a principal series representation of SL(2,R). In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, the Hardy space H2(R) and its conjugate. These are the spaces of L2 boundary values of holomorphic functions on the upper and lower half-planes. H2(R) and its conjugate consist of exactly those L2 functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = −i(2P − I), with P being the orthogonal projection from L2(R) onto H2(R), it follows that H2(R) and its orthogonal complement are eigenspaces of H for the eigenvalues −i and +i respectively. In particular, H commutes with the operators Ug. The restrictions of the operators Ug to H2(R) and its conjugate give irreducible representations of SL(2,R), the so-called limit of discrete series representations.[1]


Extending the domain of definition

Hilbert transform of distributions

It is further possible to extend the Hilbert transform to certain spaces of distributions (Pandey 1996, Chapter 3). Since the Hilbert transform commutes with differentiation, and is a bounded operator on Lp, H restricts to give a continuous transform on the inverse limit of Sobolev spaces:

$\mathcal{D}_{L^p} = \underset{n\to\infty}{\underset{\longleftarrow}{\lim}} W^{n,p}(\mathbb{R}).$

The Hilbert transform can then be defined on the dual space of $\mathcal{D}_{L^p}$, denoted $\mathcal{D}_{L^p}'$, consisting of Lp distributions. This is accomplished by the duality pairing: for $u\in \mathcal{D}'_{L^p}$, define $H(u)\in \mathcal{D}'_{L^p}$ by

$\langle Hu,v\rangle \overset{\mathrm{def}}{=} \langle u, -Hv\rangle$

for all $v\in\mathcal{D}_{L^p}$.

It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand & Shilov (1967), but considerably more care is needed because of the singularity in the integral.

Hilbert transform of bounded functions

The Hilbert transform can be defined for functions in L∞(R) as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps L∞(R) to the Banach space of bounded mean oscillation (BMO) classes.

Interpreted naively, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with u(t) = sgn(t), the integral defining H(u) diverges almost everywhere to ±∞. To alleviate such difficulties, the Hilbert transform of an L∞-function is therefore defined by the following regularized form of the integral

$H(u)(t) = \text{p.v.} \int_{-\infty}^\infty u(\tau)\left\{h(t-\tau)- h_0(-\tau)\right\}\,d\tau$

where as above h(x) = 1/πx and

$h_0(x) = \begin{cases} 0&\mathrm{if\ }|x|<1\\ \frac{1}{\pi x} &\mathrm{otherwise} \end{cases}$

The modified transform H agrees with the original transform on functions of compact support up to an additive constant, by a general result of Calderón & Zygmund (1952); see Fefferman (1971). The resulting integral, furthermore, converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation.

A deep result of Fefferman (1971) and Fefferman & Stein (1972) is that a function is of bounded mean oscillation if and only if it has the form ƒ + H(g) for some ƒ, g ∈ L∞(R).


Conjugate functions

The Hilbert transform can be understood in terms of a pair of functions f(x) and g(x) such that the function

$F(x) = f(x) + ig(x)$

is the boundary value of a holomorphic function F(z) in the upper half-plane (Titchmarsh 1948, Chapter V). Under these circumstances, if f and g are sufficiently integrable, then one is the Hilbert transform of the other.

Suppose that f ∈ Lp(R). Then, by the theory of the Poisson integral, f admits a unique harmonic extension into the upper half-plane, and this extension is given by

$u(x+iy) = u(x,y) = \frac{1}{\pi}\int_{-\infty}^\infty f(s)\frac{y}{(x-s)^2+y^2}\,ds$

which is the convolution of f with the Poisson kernel

$P(x,y) = \frac{1}{\pi}\frac{y}{x^2+y^2}.$

Furthermore, there is a unique harmonic function v defined in the upper half-plane such that F(z) = u(z) + iv(z) is holomorphic and

$\lim_{y\to\infty} v(x+iy) = 0.$

This harmonic function is obtained from f by taking a convolution with the conjugate Poisson kernel

$Q(x,y) = \frac{1}{\pi}\frac{x}{x^2+y^2}.$

Thus

$v(x,y) = \frac{1}{\pi}\int_{-\infty}^\infty f(s)\frac{x-s}{(x-s)^2+y^2}\,ds.$

Indeed, the real and imaginary parts of the Cauchy kernel are

$\frac{i}{\pi z} = P(x,y) + iQ(x,y),$

so that F = u + iv is holomorphic by the Cauchy integral theorem.

The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of v(x,y) as y → 0 is the Hilbert transform of f. Thus, succinctly,

$H(f) = \lim_{y\to 0} Q(\cdot,y)*f.$

Titchmarsh's theorem

A theorem due to Edward Charles Titchmarsh makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform (Titchmarsh 1948, Theorem 95). It gives necessary and sufficient conditions for a complex-valued square-integrable function F(x) on the real line to be the boundary value of a function in the Hardy space H2(U) of holomorphic functions in the upper half-plane U.

The theorem states that the following conditions for a complex-valued square-integrable function F : R → C are equivalent:

• F(x) is the limit as z → x of a holomorphic function F(z) in the upper half-plane such that
$\int_{-\infty}^\infty |F(x+iy)|^2\,dx < K$
for all y > 0.
• Im(F) is the Hilbert transform of Re(F), where Re(F) and Im(F) are real-valued functions with F = Re(F) + i Im(F).
• The Fourier transform $\mathcal{F}(F)(x)$ vanishes for x < 0.

A weaker result is true for functions of class Lp for p > 1 (Titchmarsh 1948, Theorem 103). Specifically, if F(z) is a holomorphic function such that

$\int_{-\infty}^\infty |F(x+iy)|^p\,dx < K$

for all y, then there is a complex-valued function F(x) in Lp(R) such that F(x + iy) → F(x) in the Lp norm as y → 0 (as well as holding pointwise almost everywhere). Furthermore,

$F(x) = f(x) + i g(x)\,$

where ƒ is a real-valued function in Lp(R) and g is the Hilbert transform (of class Lp) of ƒ.

This is not true in the case p = 1. In fact, the Hilbert transform of an L1 function ƒ need not converge in the mean to another L1 function. Nevertheless (Titchmarsh 1948, Theorem 105), the Hilbert transform of ƒ does converge almost everywhere to a finite function g such that, for every 0 < p < 1,

$\int_{-\infty}^\infty \frac{|g(x)|^p}{1+x^2}\,dx < \infty.$

This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc (Duren 1970, Theorem 4.2).

Riemann–Hilbert problem

One form of the Riemann–Hilbert problem seeks to identify pairs of functions F+ and F− such that F+ is holomorphic on the upper half-plane and F− is holomorphic on the lower half-plane, such that for x along the real axis,

$F_+(x) - F_-(x) = f(x)$

where f(x) is some given real-valued function of x ∈ R. The left-hand side of this equation may be understood either as the difference of the limits of F± from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem.

Formally, if F± solve the Riemann–Hilbert problem

$f(x) = F_+(x) - F_-(x),$

then the Hilbert transform of f(x) is given by

$H(f)(x) = \frac{1}{i}(F_+(x) + F_-(x))$ (Pandey 1996, Chapter 2).

Hilbert transform on the circle

For a periodic function f the circular Hilbert transform is defined as

$\tilde f(x)=\frac{1}{2\pi}\text{ p.v.}\int_0^{2\pi}f(t)\cot\left(\frac{x-t}{2}\right)\,dt.$

The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel,

$\cot\left(\frac{x-t}{2}\right)$

is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied (Khvedelidze 2001).

The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel 1/x periodic. More precisely, for x≠0

$\frac{1}{2}\cot\left(\frac{x}{2}\right)=\frac{1}{x}+\sum_{n=1}^\infty \left( \frac{1}{x+2n\pi} + \frac{1}{x-2n\pi} \right) .$

Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform on the real line using this correspondence.
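The partial-fraction expansion above converges slowly but can be verified numerically (a sketch; the truncation point is arbitrary):

```python
import numpy as np

x = 1.3
n = np.arange(1, 200_001)
# Truncated sum 1/x + sum_n [1/(x + 2n*pi) + 1/(x - 2n*pi)]; the paired
# terms behave like -2x/(4 pi^2 n^2), so the tail is O(1/N).
partial = 1/x + np.sum(1/(x + 2*n*np.pi) + 1/(x - 2*n*np.pi))
target = 0.5 / np.tan(x / 2)          # (1/2) cot(x/2)
```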

Another more direct connection is provided by the Cayley transform C(x) = (x − i) / (x + i), which carries the real line onto the circle and the upper half-plane onto the unit disk. It induces a unitary map

$\displaystyle{Uf(x)=\pi^{-1/2} (x+i)^{-1} f(C(x))}$

of L2(T) onto L2(R). The operator U carries the Hardy space H2(T) onto the Hardy space H2(R).[2]


Hilbert transform in signal processing

Bedrosian's theorem

Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or

$H(f_{LP}(t) f_{HP}(t)) = f_{LP}(t) H(f_{HP}(t))\,$

where fLP and fHP are the low- and high-pass signals respectively (Schreier & Scharf 2010, 14).

Amplitude modulated signals are modeled as the product of a bandlimited "message" waveform, um(t), and a sinusoidal "carrier":

$u(t) = u_m(t) \cdot \cos(\omega t + \phi)\,$

When um(t) has no frequency content above the carrier frequency, $\frac{\omega}{2\pi}\text{ Hz,}$ then by Bedrosian's theorem:

$H(u)(t)= u_m(t) \cdot \sin(\omega t + \phi).$ (Bedrosian 1962)
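In a DFT model with bin-aligned, spectrally disjoint factors, Bedrosian's identity holds exactly (an illustrative sketch; signal parameters are arbitrary):

```python
import numpy as np

def hilbert_fft(u):
    # Discrete multiplier model of H: -i*sgn(omega) in the DFT domain.
    U = np.fft.fft(u)
    return np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(u))) * U).real

N = 4096
n = np.arange(N)
um = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * n / N)        # low-pass "message"
carrier = np.cos(2 * np.pi * 200 * n / N + 0.4)       # high-frequency carrier
lhs = hilbert_fft(um * carrier)                       # H(message * carrier)
rhs = um * np.sin(2 * np.pi * 200 * n / N + 0.4)      # message * H(carrier)
```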

Analytic representation

In the context of signal processing, the conjugate function interpretation of the Hilbert transform, discussed above, gives the analytic representation of a signal u(t):

$u_a(t) = u(t) + i\cdot H(u)(t),\,$

which is a holomorphic function in the upper half plane.

For the narrowband model [above], the analytic representation is:

\begin{align} u_a(t) &= u_m(t) \cdot \cos(\omega t + \phi) + i\cdot u_m(t) \cdot \sin(\omega t + \phi)\\ &= u_m(t) \cdot \left[\cos(\omega t + \phi) + i\cdot \sin(\omega t + \phi)\right]\\ &= u_m(t) \cdot e^{i(\omega t + \phi)} \quad \text{(by Euler's formula)} \end{align}   (Eq.1)

This complex heterodyne operation shifts all the frequency components of um(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.
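A discrete analytic signal can be built with the one-sided-spectrum recipe (the same general construction used by common hilbert() library routines, e.g., in MATLAB or SciPy; this numpy version is an illustrative sketch). For the narrowband model it recovers the AM envelope:

```python
import numpy as np

def analytic(u):
    # Zero the negative frequencies and double the positive ones,
    # keeping DC (and Nyquist, for even N) unscaled.
    N = len(u)
    U = np.fft.fft(u)
    w = np.zeros(N)
    w[0] = 1.0
    w[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        w[N // 2] = 1.0
    return np.fft.ifft(U * w)          # u + i*H(u)

N = 1024
n = np.arange(N)
um = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * n / N)    # positive message envelope
u = um * np.cos(2 * np.pi * 60 * n / N + 0.3)
ua = analytic(u)
envelope = np.abs(ua)                  # equals um for this bin-aligned model
```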

Phase/Frequency modulation

The form:

$u(t) = A\cdot \cos(\omega t + \phi_m(t))\,$

is called phase (or frequency) modulation. The instantaneous frequency is  $\omega + \phi_m^\prime(t).$  For sufficiently large ω, compared to  $\phi_m^\prime$:

$H(u)(t) \approx A\cdot \sin(\omega t + \phi_m(t)),\,$

and:

$u_a(t) \approx A \cdot e^{i(\omega t + \phi_m(t))}.$
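The instantaneous frequency can be estimated from the unwrapped phase of the analytic signal (a sketch; the signal parameters are arbitrary, and the one-sided-spectrum helper mirrors common hilbert() routines):

```python
import numpy as np

def analytic(u):
    # One-sided spectrum: u + i*H(u) for a real periodic sequence u.
    N = len(u)
    U = np.fft.fft(u)
    w = np.zeros(N)
    w[0] = 1.0
    w[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        w[N // 2] = 1.0
    return np.fft.ifft(U * w)

N = 4096
n = np.arange(N)
phi = 0.5 * np.sin(2 * np.pi * 2 * n / N)      # slow phase modulation
u = np.cos(2 * np.pi * 100 * n / N + phi)      # carrier at bin 100
ua = analytic(u)
# Instantaneous frequency in bins: derivative of the unwrapped phase.
inst_freq = np.diff(np.unwrap(np.angle(ua))) * N / (2 * np.pi)
```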

Single sideband modulation (SSB)

When um(t) in  Eq.1 is also an analytic representation (of a message waveform), that is:

$u_m(t) = m(t) + i\cdot \widehat{m}(t),$

the result is single-sideband modulation:

$u_a(t) = (m(t) + i\cdot \widehat{m}(t)) \cdot e^{i(\omega t + \phi)},$

whose transmitted component is:

\begin{align} u(t) &= \operatorname{Re}\{u_a(t)\}\\ &= m(t)\cdot \cos(\omega t + \phi) - \widehat{m}(t)\cdot \sin(\omega t + \phi). \end{align}
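The sideband suppression can be seen directly in a DFT model: heterodyning an analytic message leaves energy only above the carrier (an illustrative sketch; bin numbers are arbitrary):

```python
import numpy as np

def analytic(u):
    # One-sided spectrum: u + i*H(u) for a real periodic sequence u.
    N = len(u)
    U = np.fft.fft(u)
    w = np.zeros(N)
    w[0] = 1.0
    w[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        w[N // 2] = 1.0
    return np.fft.ifft(U * w)

N = 4096
n = np.arange(N)
m = np.cos(2 * np.pi * 5 * n / N)                      # message at bin 5
ua = analytic(m) * np.exp(2j * np.pi * 300 * n / N)    # heterodyne to bin 300
u = ua.real                                            # transmitted SSB signal
U = np.abs(np.fft.fft(u))
upper = U[305]     # upper sideband (carrier + message frequency)
lower = U[295]     # lower sideband, suppressed
```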

Causality

The function h with h(t) = 1/(πt) is a non-causal filter and therefore cannot be implemented as is, if u is a time-dependent signal. If u is a function of a non-temporal variable (e.g., spatial), the non-causality might not be a problem. The filter also has infinite support, which may be a problem in certain applications. Another issue relates to what happens at zero frequency (DC), which can be avoided by ensuring that u does not contain a DC component.

A practical implementation in many cases implies that a finite support filter, which in addition is made causal by means of a suitable delay, is used to approximate the computation. The approximation may also imply that only a specific frequency range is subject to the characteristic phase shift related to the Hilbert transform. See also quadrature filter.


Discrete Hilbert transform

Figure 1: Filter whose frequency response is bandlimited to about 95% of the Nyquist frequency
Figure 2: Hilbert transform filter with a highpass frequency response
Figure 3.
Figure 4. The Hilbert transform of cos(ωt) is sin(ωt). This figure shows the difference between sin(ωt) and an approximate Hilbert transform computed by the MATLAB library function, hilbert(·)

For a discrete function, u[n], with discrete-time Fourier transform (DTFT), U(ω), the Hilbert transform is given by:

$H(u)[n]\ =\ \scriptstyle{DTFT}^{-1} \displaystyle \{U(\omega)\cdot \sigma_H(\omega)\},$

where:

$\sigma_H(\omega)\ \stackrel{\mathrm{def}}{=}\ \begin{cases} e^{+i\pi/2}, & -\pi < \omega < 0 \\ e^{-i\pi/2}, & 0 < \omega < \pi\\ 0, & \omega=-\pi, 0, \pi. \end{cases}$

And by the convolution theorem, an equivalent formulation is:

$H(u)[n] = u[n] * h[n],\,$

where:

$h[n]\ \stackrel{\mathrm{def}}{=}\ \scriptstyle{DTFT}^{-1} \big \{\displaystyle \sigma_H(\omega)\big \} = \begin{cases} 0, & \mbox{for }n\mbox{ even},\\ \frac2{\pi n} & \mbox{for }n\mbox{ odd}. \end{cases}$

When the convolution is performed numerically, an FIR approximation is substituted for h[n], as shown in Figure 1, and we see rolloff of the passband at the low and high ends (0 and Nyquist), resulting in a bandpass filter. The high end can be restored, as shown in Figure 2, by an FIR that more closely resembles samples of the smooth, continuous-time h(t). But as a practical matter, a properly-sampled u[n] sequence has no useful components at those frequencies. As the impulse response gets longer, the low end frequencies are also restored.[3]
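An FIR truncation of h[n] can be sketched as follows (window length and test frequency are arbitrary choices; a mid-band tone avoids the rolloff near 0 and Nyquist discussed above):

```python
import numpy as np

def h_fir(L):
    # Truncated ideal kernel: h[n] = 2/(pi*n) for odd n, 0 for even n, |n| <= L.
    k = np.arange(-L, L + 1)
    h = np.zeros(2 * L + 1)
    odd = k % 2 != 0
    h[odd] = 2.0 / (np.pi * k[odd])
    return h

N = 2048
n = np.arange(N)
u = np.cos(2 * np.pi * 512 * n / N)             # quarter-Nyquist tone (mid passband)
y = np.convolve(u, h_fir(101), mode='same')     # FIR approximation of H(u)
target = np.sin(2 * np.pi * 512 * n / N)
err = np.max(np.abs(y[300:-300] - target[300:-300]))   # ignore edge transients
```

The residual error here comes from truncating the slowly decaying 2/(πn) tail; lengthening the window shrinks it, as the surrounding text describes.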

With an FIR approximation to h[n], a method called overlap-save is an efficient way to perform the convolution on a long u[n] sequence. Sometimes the array FFT{h[n]} is replaced by corresponding samples of σH(ω). That has the effect of convolving with the periodic summation[4]:

$h_N[n]\ \stackrel{\text{def}}{=}\ \sum_{m=-\infty}^{\infty} h[n-mN].$

Figure 3 compares a half-cycle of hN[n] with an equivalent length portion of h[n]. The difference between them and the fact that they are not shorter than the segment length (N) are sources of distortion that are managed (reduced) by increasing the segment length and overlap parameters.

The popular MATLAB function, hilbert(u,N), returns an approximate discrete Hilbert transform of u[n] in the imaginary part of the complex output sequence. The real part is the original input sequence, so that the complex output is an analytic representation of u[n]. Similar to the discussion above, hilbert(u, N) only uses samples of the sgn(ω) distribution and therefore convolves with hN[n]. Distortion can be managed by choosing N larger than the actual u[n] sequence and discarding an appropriate number of output samples. An example of this type of distortion is shown in Figure 4.


Notes

1. ^ See:
2. ^ Rosenblum & Rovnyak 1997, p. 92
3. ^ Hilbert studied the discrete transform
$\frac{1}{n} * u[n]=\sum_{\substack{m=-\infty \\ m\neq n}}^{\infty} \frac{u[m]}{n-m},$
and showed that for u[n] in ℓ2 the sequence H(u)[n] is also in ℓ2 (see Hilbert's inequality). An elementary proof of this fact can be found in (Grafakos 1994). This transform was used by E. C. Titchmarsh to give alternate proofs of the results of M. Riesz in the continuous case (Titchmarsh 1926; Hardy, Littlewood & Polya 1952, ¶314), but it is not used for pragmatic signal processing.
4. ^
