# Linear system

In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

## Definition

Block diagram illustrating the additivity property for a deterministic continuous-time SISO system. The system satisfies the additivity property or is additive if and only if $y_{3}(t)=y_{1}(t)+y_{2}(t)$  for all time $t$  and for all inputs $x_{1}(t)$  and $x_{2}(t)$ .

Block diagram illustrating the homogeneity property for a deterministic continuous-time SISO system. The system satisfies the homogeneity property or is homogeneous if and only if $y_{2}(t)=a\,y_{1}(t)$  for all time $t$ , for all real constant $a$  and for all input $x_{1}(t)$ .

Block diagram illustrating the superposition principle for a deterministic continuous-time SISO system. The system satisfies the superposition principle and is thus linear if and only if $y_{3}(t)=a_{1}\,y_{1}(t)+a_{2}\,y_{2}(t)$  for all time $t$ , for all real constants $a_{1}$  and $a_{2}$  and for all inputs $x_{1}(t)$  and $x_{2}(t)$ .

A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description.

A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants, and all time).

The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.

In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.

Mathematically, for a continuous-time system, given two arbitrary inputs

$x_{1}(t)$
$x_{2}(t)$

as well as their respective zero-state outputs

$y_{1}(t)=H\left\{x_{1}(t)\right\}$
$y_{2}(t)=H\left\{x_{2}(t)\right\}$

then a linear system must satisfy

$\alpha y_{1}(t)+\beta y_{2}(t)=H\left\{\alpha x_{1}(t)+\beta x_{2}(t)\right\}$

for any scalar values $\alpha$ and $\beta$, for any input signals $x_{1}(t)$ and $x_{2}(t)$, and for all time $t$.
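This defining property can be checked numerically. The sketch below (assuming NumPy; the first-difference operator is just a hypothetical stand-in for H, since any linear map would do) verifies that a linear combination of inputs produces the same linear combination of the individual zero-state outputs:

```python
import numpy as np

# Hypothetical stand-in for the operator H: the first-difference
# operator, which is linear (any linear map would serve here).
def H(x):
    return np.diff(x, prepend=0.0)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)    # two arbitrary input signals
x2 = rng.standard_normal(100)
alpha, beta = 2.0, -0.5          # arbitrary scalars

lhs = alpha * H(x1) + beta * H(x2)   # combination of individual outputs
rhs = H(alpha * x1 + beta * x2)      # output of the combined input
superposition_holds = np.allclose(lhs, rhs)
```

A nonlinear operator, such as elementwise squaring, would fail the same check.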

The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.

Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).

Another perspective is that the solutions to a linear system form a family of functions which act like vectors in the geometric sense: they can be added together and scaled.

A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.

The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors (${\mathbf {x} }_{1}(t)$, ${\mathbf {x} }_{2}(t)$, ${\mathbf {y} }_{1}(t)$, ${\mathbf {y} }_{2}(t)$) are considered instead of input and output signals ($x_{1}(t)$, $x_{2}(t)$, $y_{1}(t)$, $y_{2}(t)$).

This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.

### Examples

A simple harmonic oscillator obeys the differential equation:

$m{\frac {d^{2}(x)}{dt^{2}}}=-kx$ .

If

$H(x(t))=m{\frac {d^{2}(x(t))}{dt^{2}}}+kx(t)$ ,

then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.
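As a quick numerical sanity check (a sketch assuming NumPy; the values of m and k are arbitrary), one can verify that $x(t)=\cos({\sqrt {k/m}}\,t)$ satisfies H(x) = 0 by approximating the second derivative with central differences:

```python
import numpy as np

m, k = 2.0, 8.0                    # arbitrary mass and spring constant
w = np.sqrt(k / m)                 # natural frequency (here 2 rad/s)
t = np.linspace(0.0, 10.0, 100001)
dt = t[1] - t[0]
x = np.cos(w * t)                  # candidate solution of the oscillator

# Central-difference approximation of d^2x/dt^2 at interior points.
x_dd = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2

# H(x) = m x'' + k x should vanish for a solution of the oscillator.
residual = m * x_dd + k * x[1:-1]
max_res = np.max(np.abs(residual))
```

The residual is zero up to discretization and rounding error.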

Other examples of linear systems include those described by $y(t)=k\,x(t)$ , $y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}$ , $y(t)=k\,\int _{-\infty }^{t}x(\tau )\mathrm {d} \tau$ , and any system described by ordinary linear differential equations. Systems described by $y(t)=k$ , $y(t)=k\,x(t)+k_{0}$ , $y(t)=\sin {[x(t)]}$ , $y(t)=\cos {[x(t)]}$ , $y(t)=x^{2}(t)$ , $y(t)={\sqrt {x(t)}}$ , $y(t)=|x(t)|$ , and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they don't always satisfy the superposition principle.
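The distinction can be illustrated numerically. The following sketch (assuming NumPy; the gain k, the test signals, and the scalars are arbitrary) shows that $y=k\,x$ satisfies the superposition principle while $y=x^{2}$ does not:

```python
import numpy as np

k = 3.0
linear = lambda x: k * x         # satisfies superposition
square = lambda x: x ** 2        # does not

x1 = np.array([1.0, 2.0])        # arbitrary test inputs
x2 = np.array([0.5, -1.0])
a, b = 2.0, 5.0                  # arbitrary scalars

lin_ok = np.allclose(a * linear(x1) + b * linear(x2),
                     linear(a * x1 + b * x2))
sq_ok = np.allclose(a * square(x1) + b * square(x2),
                    square(a * x1 + b * x2))
```

Here `lin_ok` is True while `sq_ok` is False; the same check rejects $y=k\,x+k_{0}$, which fails additivity despite having a straight-line graph.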

The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by $y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}$  (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.
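The elliptical output-input plot can be verified directly. In the sketch below (assuming NumPy; k is an arbitrary gain), the sinusoidal input $x(t)=\sin(t)$ into $y=k\,\mathrm{d}x/\mathrm{d}t$ produces $y(t)=k\cos(t)$, and every (x, y) pair lies on the ellipse $x^{2}+(y/k)^{2}=1$:

```python
import numpy as np

k = 2.0                              # arbitrary gain
t = np.linspace(0.0, 2.0 * np.pi, 1000)
x = np.sin(t)                        # sinusoidal input
y = k * np.cos(t)                    # output of y = k dx/dt

# Every (x, y) point satisfies x^2 + (y/k)^2 = 1: an ellipse centered
# at the origin, not a straight line through the origin.
on_ellipse = np.allclose(x**2 + (y / k)**2, 1.0)
```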

Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input), even when the input is a sinusoid. For example, consider a system described by $y(t)=(1.5+\cos {(t)})\,x(t)$. It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form $x(t)=\cos {(3t)}$, product-to-sum trigonometric identities show that the output is $y(t)=1.5\cos {(3t)}+0.5\cos {(2t)}+0.5\cos {(4t)}$. That is, the output does not consist only of sinusoids at the input frequency (3 rad/s), but also contains sinusoids at 2 rad/s and 4 rad/s. Furthermore, taking the least common multiple of the periods of the output sinusoids, the fundamental angular frequency of the output is 1 rad/s, which is different from that of the input.
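The product-to-sum expansion quoted above can be confirmed numerically (a sketch assuming NumPy):

```python
import numpy as np

t = np.linspace(0.0, 20.0, 2001)
# Output of the time-varying system y(t) = (1.5 + cos t) x(t)
# for the input x(t) = cos(3t), computed directly...
y_direct = (1.5 + np.cos(t)) * np.cos(3.0 * t)
# ...and via the product-to-sum expansion into three sinusoids.
y_expanded = (1.5 * np.cos(3.0 * t)
              + 0.5 * np.cos(2.0 * t)
              + 0.5 * np.cos(4.0 * t))
identity_holds = np.allclose(y_direct, y_expanded)
```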

## Time-varying impulse response

The time-varying impulse response h(t2, t1) of a linear system is defined as the response of the system at time t = t2 to a single impulse applied at time t = t1. In other words, if the input x(t) to a linear system is

$x(t)=\delta (t-t_{1})$

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

$y(t)|_{t=t_{2}}=h(t_{2},t_{1})$

then the function h(t2, t1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

$h(t_{2},t_{1})=0,\quad t_{2}<t_{1}$

## The convolution integral

The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

$y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'$

If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference τ = t − t', which is zero for τ < 0 (namely t < t'). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways,

$y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau$
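The convolution integral can be approximated by a Riemann sum. The sketch below (assuming NumPy; the exponential impulse response $h(\tau )=e^{-\tau }$ and the unit-step input are hypothetical choices with the known closed-form output $y(t)=1-e^{-t}$) compares the numerical convolution against that closed form:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                 # causal impulse response h(tau) = e^{-tau}
x = np.ones_like(t)            # unit step applied at t = 0

# Riemann-sum approximation of y(t) = integral of h(tau) x(t - tau) dtau
y = np.convolve(h, x)[: len(t)] * dt

y_exact = 1.0 - np.exp(-t)     # known closed-form step response
max_err = np.max(np.abs(y - y_exact))
```

The maximum error shrinks in proportion to the step size dt.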

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function which is:

$H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.$

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and setting s = iω yields the frequency response function:

$H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt$
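The frequency response can likewise be approximated by numerical integration. In this sketch (assuming NumPy; $h(t)=e^{-t}$ is a hypothetical impulse response whose exact frequency response is $1/(1+i\omega )$, i.e. the transfer function $H(s)=1/(1+s)$ evaluated at $s=i\omega$):

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 30.0, dt)
h = np.exp(-t)                     # hypothetical impulse response

omega = 2.0                        # evaluate at 2 rad/s
# Riemann-sum approximation of the Fourier integral of h(t)
H_num = np.sum(h * np.exp(-1j * omega * t)) * dt
H_exact = 1.0 / (1.0 + 1j * omega)
err = abs(H_num - H_exact)
```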

## Discrete-time systems

The output of any discrete-time linear system is related to the input by the time-varying convolution sum:

$y[n]=\sum _{m=-\infty }^{n}{h[n,m]x[m]}=\sum _{m=-\infty }^{\infty }{h[n,m]x[m]}$

or equivalently, for a time-invariant system, on redefining h,

$y[n]=\sum _{k=0}^{\infty }{h[k]x[n-k]}=\sum _{k=-\infty }^{\infty }{h[k]x[n-k]}$

where

$k=n-m\,$

represents the lag time between the stimulus at time m and the response at time n.
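The convolution sum can be implemented directly from the definition. The sketch below (assuming NumPy; h and x are arbitrary short sequences) computes $y[n]=\sum _{k}h[k]x[n-k]$ with explicit loops and checks it against NumPy's built-in convolution:

```python
import numpy as np

def lti_output(h, x):
    """y[n] = sum over k of h[k] x[n-k], for a causal finite-length h."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(h)):
            if n - k >= 0:          # causality: no response before input
                y[n] += h[k] * x[n - k]
    return y

h = np.array([1.0, 0.5, 0.25])      # hypothetical causal impulse response
x = np.array([1.0, 0.0, 2.0, -1.0]) # arbitrary input sequence
y = lti_output(h, x)
y_ref = np.convolve(h, x)[: len(x)] # the same sum, via NumPy
```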