A linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.
A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description. Linear systems satisfy the property of superposition. Given two valid inputs

x1(t), x2(t)

as well as their respective outputs

y1(t) = H{x1(t)}
y2(t) = H{x2(t)}

then a linear system must satisfy

α y1(t) + β y2(t) = H{α x1(t) + β x2(t)}

for any scalar values α and β.
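The superposition property can be checked numerically. The sketch below uses a hypothetical linear system H (a 3-point moving average, chosen only for illustration) and verifies that H(α x1 + β x2) = α H(x1) + β H(x2), while a nonlinear system such as squaring fails the same test:

```python
import numpy as np

def H(x):
    """A simple linear system: 3-point moving average (illustrative choice)."""
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
x2 = rng.standard_normal(100)
a, b = 2.5, -1.3

# Superposition: H(a*x1 + b*x2) equals a*H(x1) + b*H(x2)
assert np.allclose(H(a * x1 + b * x2), a * H(x1) + b * H(x2))

# A nonlinear system, e.g. squaring the input, violates superposition
N = lambda x: x ** 2
assert not np.allclose(N(a * x1 + b * x2), a * N(x1) + b * N(x2))
```

Any operator built from convolution, differentiation, or scaling passes this test; that is what makes the analysis methods described below applicable.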
The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time and x(t) is the system state. Given y(t) and H, the system can be solved for x(t). For example, a simple harmonic oscillator obeys the differential equation:

m (d²x/dt²) = −k x

If

H(x(t)) = m (d²x(t)/dt²) + k x(t),

then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.
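The linearity of this operator can be illustrated numerically. The sketch below approximates H(x) = m x'' + k x with finite differences (m, k, and the test signals are arbitrary illustrative values), checks superposition, and confirms that the oscillator solution x(t) = cos(√(k/m) t) satisfies H(x) ≈ 0:

```python
import numpy as np

# Discretize H(x) = m x'' + k x with numerical derivatives (illustrative m, k)
m, k = 2.0, 5.0
dt = 1e-3
t = np.arange(0.0, 1.0, dt)

def H(x):
    xpp = np.gradient(np.gradient(x, dt), dt)  # numerical second derivative
    return m * xpp + k * x

# Superposition holds because differentiation and scaling are linear
x1 = np.sin(3 * t)
x2 = np.cos(7 * t)
a, b = 1.5, -0.5
assert np.allclose(H(a * x1 + b * x2), a * H(x1) + b * H(x2))

# x(t) = cos(sqrt(k/m) t) solves m x'' + k x = 0, so H(x) should vanish
x = np.cos(np.sqrt(k / m) * t)
interior = slice(2, -2)  # skip boundary error of the finite differences
assert np.max(np.abs(H(x)[interior])) < 1e-2
```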
The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function in terms of unit impulses or frequency components.
Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.
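A standard linearization example is the pendulum: the equation of motion θ'' = −(g/L) sin θ is nonlinear, but for small angles sin θ ≈ θ, giving the linear equation θ'' = −(g/L) θ. The sketch below (with illustrative values of g and L) shows that the approximation error is negligible for small angles but large otherwise:

```python
import numpy as np

# Linearization sketch: theta'' = -(g/L) sin(theta) is nonlinear; for
# small angles sin(theta) ~ theta gives the linear theta'' = -(g/L) theta.
g, L = 9.81, 1.0  # illustrative values

def f_nonlinear(theta):
    return -(g / L) * np.sin(theta)

def f_linear(theta):
    return -(g / L) * theta

# Error is O(theta^3): tiny at 0.05 rad, large at 1 rad
assert abs(f_nonlinear(0.05) - f_linear(0.05)) < 1e-3
assert abs(f_nonlinear(1.0) - f_linear(1.0)) > 1.0
```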
Time-varying impulse response
The time-varying impulse response h(t2, t1) of a linear system is defined as the response of the system at time t = t2 to a single impulse applied at time t = t1. In other words, if the input x(t) to a linear system is

x(t) = δ(t − t1),

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

y(t) at t = t2 is h(t2, t1),

then the function h(t2, t1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

h(t2, t1) = 0 for t2 < t1
The convolution integral
The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

y(t) = ∫ from t'=−∞ to t of h(t, t') x(t') dt' = ∫ from −∞ to ∞ of h(t, t') x(t') dt'
If the properties of the system do not depend on the time at which it is operated, then it is said to be time-invariant and h is a function only of the time difference τ = t − t', which is zero for τ < 0 (namely t < t'). By redefinition of h it is then possible to write the input–output relation equivalently in any of the ways,

y(t) = ∫ from −∞ to t of h(t − t') x(t') dt' = ∫ from −∞ to ∞ of h(t − t') x(t') dt' = ∫ from −∞ to ∞ of h(τ) x(t − τ) dτ = ∫ from 0 to ∞ of h(τ) x(t − τ) dτ
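The convolution integral for a time-invariant system can be approximated numerically with a Riemann sum. The sketch below uses the illustrative causal impulse response h(t) = e^(−t) for t ≥ 0; for a unit-step input the exact output is 1 − e^(−t), which the approximation reproduces:

```python
import numpy as np

# Approximate y(t) = integral of h(tau) x(t - tau) d tau by a Riemann sum,
# using the illustrative causal impulse response h(t) = exp(-t), t >= 0.
dt = 1e-3
tau = np.arange(0.0, 20.0, dt)
h = np.exp(-tau)

def system_output(x, t):
    # x is a function of time; sample x(t - tau) over the tau grid
    return np.sum(h * x(t - tau)) * dt

# For a unit-step input, the exact output is 1 - exp(-t) for t >= 0
step = lambda t: (t >= 0).astype(float)
for t0 in (0.5, 1.0, 3.0):
    assert abs(system_output(step, t0) - (1 - np.exp(-t0))) < 2e-3
```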
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function, called the transfer function, which is:

H(s) = ∫ from 0 to ∞ of h(t) e^(−st) dt
In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and setting s = iω yields the frequency response function:

H(iω) = ∫ from −∞ to ∞ of h(t) e^(−iωt) dt
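The frequency response can likewise be checked numerically. For the illustrative impulse response h(t) = e^(−t) u(t), the transfer function is H(s) = 1/(1 + s), so H(iω) = 1/(1 + iω); a discretized version of the integral agrees with this closed form:

```python
import numpy as np

# For h(t) = exp(-t) u(t) (illustrative), H(i omega) = 1 / (1 + i omega).
dt = 1e-3
t = np.arange(0.0, 30.0, dt)
h = np.exp(-t)

def freq_response(omega):
    # Riemann-sum approximation of the Fourier integral over t >= 0
    return np.sum(h * np.exp(-1j * omega * t)) * dt

for omega in (0.0, 1.0, 5.0):
    exact = 1.0 / (1.0 + 1j * omega)
    assert abs(freq_response(omega) - exact) < 2e-3
```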
Discrete time systems
The output of any discrete time linear system is related to the input by the time-varying convolution sum:

y[n] = Σ from m=−∞ to n of h[n, m] x[m] = Σ from m=−∞ to ∞ of h[n, m] x[m]
or equivalently for a time-invariant system on redefining h,

y[n] = Σ from k=0 to ∞ of h[k] x[n − k] = Σ from k=−∞ to ∞ of h[k] x[n − k]

where k = n − m represents the lag time between the stimulus at time m and the response at time n.
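The time-invariant convolution sum translates directly into code. The sketch below implements it naively for illustrative finite sequences h and x, and checks the result against numpy's library convolution:

```python
import numpy as np

# Direct implementation of y[n] = sum over k of h[k] * x[n - k]
# for finite causal sequences (illustrative values below).
def convolve_sum(h, x):
    n_out = len(h) + len(x) - 1
    y = [0.0] * n_out
    for n in range(n_out):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

h = [1.0, 0.5, 0.25]   # causal impulse response
x = [1.0, 2.0, 3.0]    # input sequence
assert np.allclose(convolve_sum(h, x), np.convolve(h, x))
```

The double loop makes the O(N·M) cost of the sum explicit; in practice `np.convolve` or FFT-based methods are used instead.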