# Linear form

In linear algebra, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars. In ℝⁿ, if vectors are represented as column vectors, then linear functionals are represented as row vectors, and their action on vectors is given by the matrix product with the row vector on the left and the column vector on the right. In general, if V is a vector space over a field k, then a linear functional f is a function from V to k that is linear:

${\displaystyle f({\vec {v}}+{\vec {w}})=f({\vec {v}})+f({\vec {w}})}$ for all ${\displaystyle {\vec {v}},{\vec {w}}\in V}$
${\displaystyle f(a{\vec {v}})=af({\vec {v}})}$ for all ${\displaystyle {\vec {v}}\in V,a\in k.}$

The set of all linear functionals from V to k, denoted by Hom_k(V, k), forms a vector space over k with the operations of addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, to distinguish it from the continuous dual space. It is often written V*, V′, or V∨ when the field k is understood.

## Linear functionals on real or complex vector spaces

We assume throughout that all vector spaces under consideration are either vector spaces over the field ℝ of real numbers or vector spaces over the field ℂ of complex numbers.

We assume that X is a vector space over 𝕂, where 𝕂 is either ℝ or ℂ.

### Basic definitions

Definition: If X is a vector space over a field 𝕂 then 𝕂 is called X 's underlying (scalar) field and any element of 𝕂 is called a scalar.
Definition: A linear map is a map F : X → Y between two vector spaces X and Y that have the same underlying scalar field, such that F(x + sy) = F(x) + sF(y) for all x, y ∈ X and all scalars s. If X and Y do not necessarily have the same underlying scalar field then we say that F is ℝ-linear and is a linear map over ℝ if it is a linear map when X and Y are both considered as vector spaces over ℝ (i.e. if F(x + ry) = F(x) + rF(y) for all x, y ∈ X and all real r).
Definition: The kernel of a map F on X is the set Ker F := { x ∈ X : F(x) = 0}. We say that a map is trivial if it is identically equal to 0.
Definition: A functional on a vector space X is a map from X into X's underlying field. A linear functional on X is a functional that is also a linear map.

Observe that if X is a vector space over ℝ then a "linear functional on X" is a linear map of the form f : X → ℝ (valued in ℝ), while if X is a vector space over ℂ then a "linear functional on X" is a linear map of the form f : X → ℂ (valued in ℂ).

Recall that ℝ (resp. ℂ) is a vector space over ℝ (resp. ℂ), so we may ask what the linear functionals on ℝ are. A function f : ℝ → ℝ is a linear functional on X = ℝ if and only if it is of the form f(x) = rx for some real number r ∈ ℝ. Note in particular that a function f having the equation of a line f(x) = a + rx with a ≠ 0 (e.g. f(x) = 1 + 2x) is not a linear functional on ℝ (since, for instance, f(1 + 1) = a + 2r ≠ 2a + 2r = f(1) + f(1)). It is, however, a type of function known as an affine linear functional.

A linear functional f is non-trivial if and only if it is surjective (i.e. its range is all of 𝕂).[1]

Definition: The algebraic dual space, or simply the dual space, is the vector space over 𝕂 consisting of all linear functionals on X. It will be denoted by X#.
Definition: If f and g are two real-valued functions and if S is a set that belongs to both of their domains, then we say that g dominates f on S and write f ≤ g on S if f(s) ≤ g(s) for all s ∈ S. We say that g extends f if every x in the domain of f belongs to the domain of g and f(x) = g(x). If g extends f then we call g a linear extension of f if g is a linear map.

### Relationships with other maps

#### Relationship between real and complex linear functionals

Suppose that X is a vector space over ℂ. Let Xℝ denote X when it is considered as a vector space over ℝ. Note that every linear functional on X is, by definition, complex-valued while every linear functional on Xℝ is real-valued.

Definition: By a real linear functional on X, we mean a linear functional on Xℝ (i.e. a linear map of the form f : X → ℝ, or more explicitly, a map f : X → ℝ such that f(x + y) = f(x) + f(y) and f(rx) = r f(x) for all x, y ∈ X and all real r ∈ ℝ).

If g is a real linear functional on X then g is a linear functional on X if and only if g is trivial (i.e. if g ∈ Xℝ# then g ∈ X# if and only if g = 0) (see footnote for an explanation).[2] Thus, we note the following important technicality:

WARNING: Any non-trivial linear functional on a complex vector space X is not a real linear functional on X. And conversely, any non-trivial real linear functional on a complex vector space X is not a linear functional on X. However, on a real vector space, linear functionals and real linear functionals are one and the same.

However, a real linear functional g on X does induce a canonical linear functional Lg ∈ X# defined by Lg(x) := g(x) − i g(ix) for all x ∈ X, where i := √−1.

Now suppose that f ∈ X# and let R := Re f (resp. I := Im f) denote the real (resp. imaginary) part of f, so that f(x) = R(x) + i I(x). Then for all x ∈ X, I(x) = −R(ix) and R(x) = I(ix), so that

f(x) = R(x) - i R(ix) = I(ix) + i I(x).

This shows that f, R, and I each completely determine one another[3] and it follows that R and I are real linear functionals on X and that the canonical linear functional on X induced by R is f (i.e. LR = f). Furthermore, for all x ∈ X,

|f(x)|² = |R(ix)|² + |R(x)|² = |I(ix)|² + |I(x)|²
= |R(ix)|² + |I(ix)|² = |R(x)|² + |I(x)|².

Thus the map g ↦ Lg, denoted by L, defines a one-to-one correspondence from Xℝ# onto X# whose inverse is the map f ↦ Re f. Furthermore, L is linear as a map over ℝ (i.e. Lg+h = Lg + Lh and Lrg = r Lg for all r ∈ ℝ and g, h ∈ Xℝ#). Similarly, the inverse of the surjective map X# → Xℝ# defined by f ↦ Im f is the map Xℝ# → X# that sends I ∈ Xℝ# to the linear functional x ↦ I(ix) + i I(x).
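The identities above are easy to check numerically. The following sketch (illustrative NumPy code; the coefficient vector `a` is made up) verifies that for a complex-linear functional f on ℂ² the real and imaginary parts satisfy I(x) = −R(ix) and R(x) = I(ix), and that f is recovered from its real part alone via f(x) = R(x) − i R(ix):

```python
import numpy as np

# A complex-linear functional on C^2, f(x) = a . x (hypothetical coefficients)
a = np.array([2 - 1j, 0.5 + 3j])
f = lambda x: a @ x

R = lambda x: f(x).real      # R = Re f, a real linear functional on X_R
I = lambda x: f(x).imag      # I = Im f

rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)

# I(x) = -R(ix) and R(x) = I(ix)
assert np.isclose(I(x), -R(1j * x))
assert np.isclose(R(x), I(1j * x))

# f is determined by its real part alone: f(x) = R(x) - i R(ix)
assert np.isclose(f(x), R(x) - 1j * R(1j * x))
```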

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray).[4]

If f is a linear functional on a real or complex vector space X and if p is a seminorm on X, then |f| ≤ p on X if and only if Re f ≤ p on X (see footnote for proof).[5][6]

##### Topological consequences

If X is a complex topological vector space (TVS), then either all three of f, Re f, and Im f are continuous (resp. bounded), or else all three are discontinuous (resp. unbounded). Moreover, if X is a complex normed space then ||f|| = ||Re f||[7] (where in particular, one side is infinite if and only if the other side is infinite).

#### Relationships with seminorms and sublinear functions

A sublinear function on a vector space X is a function p : X → ℝ that satisfies the following two properties:

1. Subadditivity: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X;
2. Positive homogeneity: p(rx) = r p(x) for every real r > 0 and every x ∈ X.

A seminorm on X is a sublinear function p : X → ℝ that satisfies the following additional property:

1. Absolute homogeneity: p(sx) = |s| p(x) for all x ∈ X and all scalars s.

Note that every linear functional on a real vector space is a sublinear function, although there are sublinear functions that are not linear functionals. Unlike linear functionals, a seminorm p is valued in the non-negative real numbers (i.e. p(x) is a real number and p(x) ≥ 0), so the only linear functional that is also a seminorm is the trivial (identically 0) map. However, if f is a linear functional on a vector space X, then its absolute value is a seminorm on X (i.e. the map on X defined by pf(x) := |f(x)| for all x ∈ X is a seminorm on X).
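As a quick numerical sanity check, the sketch below (with arbitrary illustrative coefficients) confirms that the absolute value of a linear functional on ℝ³ satisfies the subadditivity, absolute homogeneity, and non-negativity required of a seminorm:

```python
import numpy as np

# f(x) = a . x, a linear functional on R^3 (coefficients chosen arbitrarily)
a = np.array([1.0, -2.0, 0.5])
f = lambda x: a @ x
p = lambda x: abs(f(x))          # p_f := |f|, claimed to be a seminorm

rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)
s = -3.7

# subadditivity, absolute homogeneity, and non-negativity
assert p(x + y) <= p(x) + p(y) + 1e-12
assert np.isclose(p(s * x), abs(s) * p(x))
assert p(x) >= 0
```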

If f is a linear functional on a real vector space X and p is a seminorm on X, then f ≤ p if and only if |f| ≤ p.[8]

#### Hahn-Banach theorem

The Hahn-Banach theorem is considered one of the most important results of the subfield of mathematics called functional analysis (as the name suggests, linear functionals play an important role in functional analysis). Due to its importance, the Hahn-Banach theorem has been generalized many times and today "Hahn-Banach theorem" refers to any one of a collection of theorems. The general idea behind a Hahn-Banach theorem is that it gives conditions under which a linear functional on a vector subspace M of X (satisfying a certain condition) can be extended to a linear functional on the whole of X (that continues to satisfy that condition). The following is one of many results known collectively as "Hahn-Banach theorems."

Hahn–Banach dominated extension theorem[3] (Rudin 1991, Th. 3.2) — If p : X → ℝ is a sublinear function and f : M → ℝ is a linear functional on a linear subspace M ⊆ X which is dominated by p on M, then there exists a linear extension F : X → ℝ of f to the whole space X that is dominated by p, i.e., there exists a linear functional F such that

F(m) = f(m)     for all m ∈ M,
F(x) ≤ p(x)     for all x ∈ X.
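In finite dimensions the theorem can be illustrated concretely. The sketch below (a hypothetical minimal example, not the construction used in the proof) takes f on the subspace M = span(e1) of ℝ² with f(t·e1) = t, dominated by the Euclidean norm p, and checks that F(x) = x1 is a linear extension of f that remains dominated by p on all of ℝ²:

```python
import numpy as np

p = lambda x: np.linalg.norm(x)   # sublinear (in fact a norm) on R^2
f = lambda t: t                   # f on M = span(e1): f(t*e1) = t, so f <= p on M
F = lambda x: x[0]                # candidate linear extension to all of R^2

rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.normal(size=2)
    assert F(x) <= p(x) + 1e-12   # F is dominated by p on all of R^2

t = 1.7
assert np.isclose(F(t * np.array([1.0, 0.0])), f(t))   # F agrees with f on M
```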

#### Relationships between multiple linear functionals

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[9][10] — If f, g1, ..., gn are linear functionals on X, then the following are equivalent:

1. f can be written as a linear combination of g1, ..., gn (i.e. there exist scalars s1, ..., sn such that f = s1 g1 + ⋅⋅⋅ + sn gn);
2. ${\displaystyle \bigcap _{i=1}^{n}\operatorname {Ker} g_{i}\subseteq \operatorname {Ker} f}$;
3. there exists a real number r such that |f(x)| ≤ r maxᵢ |gᵢ(x)| for all x ∈ X.
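Both directions of the theorem can be probed numerically in ℝ⁴, representing functionals as row vectors (the data below is illustrative). The common kernel of g1, g2 is computed as the nullspace of the stacked matrix via the SVD, and the coefficients of f in terms of g1, g2 are recovered by least squares:

```python
import numpy as np

# Functionals on R^4 represented as row vectors (arbitrary illustrative data)
g1 = np.array([1.0, 2.0, 0.0, -1.0])
g2 = np.array([0.0, 1.0, 3.0, 1.0])
f = 2.0 * g1 - 5.0 * g2               # f is a linear combination of g1, g2

# Kernel of (g1, g2): nullspace of the 2x4 stacked matrix, via SVD
G = np.vstack([g1, g2])
_, s, Vt = np.linalg.svd(G)
null_basis = Vt[2:]                   # rows spanning Ker g1 ∩ Ker g2

# Every vector in Ker g1 ∩ Ker g2 is also killed by f
assert np.allclose(null_basis @ f, 0)

# Conversely, the coefficients are recoverable by least squares
coef, *_ = np.linalg.lstsq(G.T, f, rcond=None)
assert np.allclose(coef, [2.0, -5.0])
```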

If f is a non-trivial linear functional on X with kernel N, if x ∈ X satisfies f(x) = 1, and if U is a balanced subset of X, then N ∩ (x + U) = ∅ if and only if |f(u)| < 1 for all u ∈ U.[7]

### Hyperplanes and maximal subspaces

Definition:[4] A vector subspace M of a vector space X is called proper if M ≠ X and it is called maximal in X if it is proper and the only vector subspace of X properly containing M is X itself.
Definition:[4] A hyperplane in X is a translate of a maximal vector subspace (i.e. it is a set of the form x + M := { x + m : m ∈ M}, where M is a maximal vector subspace of X and x is any element of X).

A vector subspace M of X is maximal in X if and only if it is the kernel of some non-trivial linear functional on X (i.e. M = ker f for some non-trivial linear functional f on X).[4]

A subset H of X is a hyperplane in X if and only if there exists some non-trivial linear functional f on X and some scalar a such that H = { x ∈ X : f(x) = a}, or equivalently, if and only if there exists some non-trivial linear functional f on X such that H = { x ∈ X : f(x) = 1}.[4]

### Continuous linear functionals

Functional analysis is a field of mathematics dedicated to studying vector spaces over ℝ or ℂ that are endowed with a topology making addition and scalar multiplication continuous. Such objects are called topological vector spaces (TVSs). Prominent examples of TVSs include Euclidean spaces, normed spaces, Banach spaces, and Hilbert spaces.

If X is a topological vector space over 𝕂 then the continuous dual space, or simply the dual space, is the vector space over 𝕂 consisting of all continuous linear functionals on X. If X is a Banach space, then so is its (continuous) dual space. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in an infinite-dimensional locally convex space, the continuous dual is a proper subspace of the algebraic dual.

A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that |f| ≤ p.[8]

Every non-trivial continuous linear functional on a TVS X is an open map.[7]

A linear functional on a complex TVS is bounded (resp. continuous) if and only if its real part is bounded (resp. continuous).[3]

A linear functional is continuous if and only if its kernel is closed.[11]

If f is a linear functional on a topological vector space (TVS) X (e.g. a normed space) and if p is a continuous sublinear function on X, then |f| ≤ p implies that f is continuous.

### Equicontinuity of families of linear functionals

Let X be a topological vector space (TVS) with continuous dual space X'.

For any subset H of X', the following are equivalent:[12]

1. H is equicontinuous;
2. H is contained in the polar of some neighborhood of 0 in X;
3. the (pre)polar of H is a neighborhood of 0 in X.

If H is an equicontinuous subset of X' then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.[12] Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X' is weak-* compact (and thus that every equicontinuous subset of X' is weak-* relatively compact[13]).[12]

## Examples and applications

### Linear functionals in ℝⁿ

Suppose that vectors in the real coordinate space ℝⁿ are represented as column vectors

${\displaystyle {\vec {x}}={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}$

For each row vector [a1 ⋯ an] there is a linear functional f defined by

${\displaystyle f({\vec {x}})=a_{1}x_{1}+\cdots +a_{n}x_{n},}$

and each linear functional can be expressed in this form.

This can be interpreted as either the matrix product or the dot product of the row vector [a1 ... an] and the column vector ${\displaystyle {\vec {x}}}$ :

${\displaystyle f({\vec {x}})=\left[a_{1}\dots a_{n}\right]{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}$
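A minimal numerical illustration (NumPy, with made-up coefficients): the row-vector action agrees with the dot product and is linear in the column vector:

```python
import numpy as np

a = np.array([[2.0, -1.0, 3.0]])       # row vector: a linear functional on R^3
x = np.array([[1.0], [4.0], [0.5]])    # column vector in R^3

# Action as a matrix product (1x3)(3x1) -> scalar, equal to the dot product
assert np.isclose((a @ x).item(), np.dot(a.ravel(), x.ravel()))

# Linearity: f(v + w) = f(v) + f(w) and f(c v) = c f(v)
v = np.array([[1.0], [0.0], [2.0]])
w = np.array([[-1.0], [5.0], [1.0]])
assert np.isclose((a @ (v + w)).item(), (a @ v).item() + (a @ w).item())
assert np.isclose((a @ (3.0 * v)).item(), 3.0 * (a @ v).item())
```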

### (Definite) Integration

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

${\displaystyle I(f)=\int _{a}^{b}f(x)\,dx}$

is a linear functional from the vector space C[a, b] of continuous functions on the interval [a, b] to the real numbers. The linearity of I follows from the standard facts about the integral:

{\displaystyle {\begin{aligned}I(f+g)&=\int _{a}^{b}[f(x)+g(x)]\,dx=\int _{a}^{b}f(x)\,dx+\int _{a}^{b}g(x)\,dx=I(f)+I(g)\\I(\alpha f)&=\int _{a}^{b}\alpha f(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx=\alpha I(f).\end{aligned}}}
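These identities survive discretization: approximating I by a midpoint Riemann sum gives a functional on sampled functions for which linearity holds exactly. A small sketch (interval and grid size chosen arbitrarily for illustration):

```python
import numpy as np

# Approximate I(f) = ∫_a^b f on [0, 1] by a midpoint Riemann sum; linearity
# holds exactly for the discretized functional as well.
a, b, N = 0.0, 1.0, 10_000
t = np.linspace(a, b, N, endpoint=False) + (b - a) / (2 * N)   # midpoints
I = lambda f: np.sum(f(t)) * (b - a) / N

f = np.sin
g = np.exp
alpha = 2.5

assert np.isclose(I(lambda x: f(x) + g(x)), I(f) + I(g))       # additivity
assert np.isclose(I(lambda x: alpha * f(x)), alpha * I(f))     # homogeneity
assert abs(I(np.sin) - (1 - np.cos(1.0))) < 1e-6               # close to ∫ sin
```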

### Evaluation

Let Pn denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let evc : Pn → ℝ be the evaluation functional

${\displaystyle \operatorname {ev} _{c}f=f(c).}$

The mapping f ↦ f(c) is linear since

{\displaystyle {\begin{aligned}(f+g)(c)&=f(c)+g(c)\\(\alpha f)(c)&=\alpha f(c).\end{aligned}}}

If x0, ..., xn are n + 1 distinct points in [a, b], then the evaluation functionals evxi, i = 0, 1, ..., n form a basis of the dual space of Pn.  (Lax (1996) proves this last fact using Lagrange interpolation.)

The integration functional I defined above defines a linear functional on the subspace Pn of polynomials of degree ≤ n. If x0, ..., xn are n + 1 distinct points in [a, b], then there are coefficients a0, ..., an for which

${\displaystyle I(f)=a_{0}f(x_{0})+a_{1}f(x_{1})+\dots +a_{n}f(x_{n})}$

for all fPn. This forms the foundation of the theory of numerical quadrature.

This follows from the fact that the linear functionals evxi : ff(xi) defined above form a basis of the dual space of Pn.[14]
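As a concrete sketch, taking n = 2 on [0, 1] with the (illustrative) nodes 0, 1/2, 1, the coefficients a_i are found by solving the Vandermonde system Σᵢ aᵢ xᵢᵏ = ∫₀¹ xᵏ dx for k = 0, 1, 2; this recovers Simpson's rule, which is then exact on P2:

```python
import numpy as np

# Quadrature weights for P_2 on [0, 1] at 3 distinct nodes: solve for a_i with
# sum_i a_i x_i^k = ∫_0^1 x^k dx = 1/(k+1) for k = 0, 1, 2 (Vandermonde system).
nodes = np.array([0.0, 0.5, 1.0])
V = np.vander(nodes, 3, increasing=True).T       # row k holds x_i^k
moments = np.array([1.0, 1.0 / 2.0, 1.0 / 3.0])
weights = np.linalg.solve(V, moments)            # Simpson's rule: 1/6, 4/6, 1/6

# The rule is then exact for every polynomial of degree <= 2
poly = np.polynomial.Polynomial([1.0, -4.0, 3.0])    # 1 - 4x + 3x^2
exact = poly.integ()(1.0) - poly.integ()(0.0)
assert np.isclose(weights @ poly(nodes), exact)
```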

### Linear functionals in quantum mechanics

Linear functionals are particularly important in quantum mechanics.  Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.  A state of a quantum mechanical system can be identified with a linear functional.  For more information see bra–ket notation.

### Distributions

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

## Visualizing linear functionals

*Geometric interpretation of a 1-form α as a stack of hyperplanes of constant value, each corresponding to those vectors that α maps to a given scalar value shown next to it along with the "sense" of increase. The zero plane passes through the origin.*

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value.  In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes.  This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).

## Dual vectors and bilinear forms

*Linear functionals (1-forms) α, β and their sum σ and vectors u, v, w, in 3d Euclidean space. The number of (1-form) hyperplanes intersected by a vector equals the inner product.[15]*

Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V* : v ↦ v* such that

${\displaystyle v^{*}(w):=\langle v,w\rangle \quad \forall w\in V,}$

where the bilinear form on V is denoted ⟨⋅, ⋅⟩ (for instance, in Euclidean space ⟨v, w⟩ = v ⋅ w is the dot product of v and w).

The inverse isomorphism is V* → V : v* ↦ v, where v is the unique element of V such that

${\displaystyle \langle v,w\rangle =v^{*}(w)\quad \forall w\in V.}$

The vector v* ∈ V* defined above is said to be the dual vector of v ∈ V.

In an infinite-dimensional Hilbert space, analogous results hold by the Riesz representation theorem.  There is a mapping V → V* into the continuous dual space V*.  However, this mapping is antilinear rather than linear.
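In finite dimensions the dual vector can be computed directly: if the inner product is ⟨v, w⟩ = vᵀAw for a symmetric positive-definite matrix A (the matrix below is a made-up example), then the dual vector of the functional f(w) = c ⋅ w is the solution of Av = c:

```python
import numpy as np

# Inner product <v, w> = v^T A w for a symmetric positive-definite A;
# the dual vector of the functional f(w) = c . w solves A v = c.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])           # Gram matrix of the inner product
c = np.array([1.0, -3.0])           # f(w) = c @ w
v = np.linalg.solve(A, c)           # dual vector: <v, w> = f(w) for all w

rng = np.random.default_rng(3)
w = rng.normal(size=2)
assert np.isclose(v @ A @ w, c @ w)
```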

## Bases in finite dimensions

### Basis of the dual space in finite dimensions

Let the vector space V have a basis ${\displaystyle {\vec {e}}_{1},{\vec {e}}_{2},\dots ,{\vec {e}}_{n}}$ , not necessarily orthogonal.  Then the dual space V* has a basis ${\displaystyle {\tilde {\omega }}^{1},{\tilde {\omega }}^{2},\dots ,{\tilde {\omega }}^{n}}$  called the dual basis defined by the special property that

${\displaystyle {\tilde {\omega }}^{i}({\vec {e}}_{j})=\left\{{\begin{matrix}1&\mathrm {if} \ i=j\\0&\mathrm {if} \ i\not =j.\end{matrix}}\right.}$

Or, more succinctly,

${\displaystyle {\tilde {\omega }}^{i}({\vec {e}}_{j})=\delta _{ij}}$

where δ is the Kronecker delta.  Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.

A linear functional ${\displaystyle {\tilde {u}}}$  belonging to the dual space ${\displaystyle {\tilde {V}}}$  can be expressed as a linear combination of basis functionals, with coefficients ("components") ui,

${\displaystyle {\tilde {u}}=\sum _{i=1}^{n}u_{i}\,{\tilde {\omega }}^{i}.}$

Then, applying the functional ${\displaystyle {\tilde {u}}}$  to a basis vector ej yields

${\displaystyle {\tilde {u}}({\vec {e}}_{j})=\sum _{i=1}^{n}\left(u_{i}\,{\tilde {\omega }}^{i}\right)({\vec {e}}_{j})=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\vec {e}}_{j}\right)\right]}$

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals.  Then

{\displaystyle {\begin{aligned}{\tilde {u}}({\vec {e}}_{j})&=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\vec {e}}_{j}\right)\right]=\sum _{i}u_{i}{\delta ^{i}}_{j}\\&=u_{j}.\end{aligned}}}

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
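A short numerical sketch (with an illustrative basis): if the basis vectors are the columns of a matrix E, then the dual basis functionals are exactly the rows of E⁻¹, since E⁻¹E = I encodes ω̃ⁱ(eⱼ) = δᵢⱼ, and the components of a functional are recovered by applying it to the basis vectors:

```python
import numpy as np

# A (non-orthogonal) basis of R^3 as the columns of E; the dual basis
# functionals are the rows of E^{-1}.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Omega = np.linalg.inv(E)                 # row i is the dual functional ω^i

assert np.allclose(Omega @ E, np.eye(3))   # ω^i(e_j) = δ_ij

# A functional u~ = sum_i u_i ω^i, written as a row vector on R^3; applying it
# to the basis vectors extracts its components u_j = u~(e_j).
u_coeffs = np.array([2.0, -1.0, 5.0])
u_tilde = u_coeffs @ Omega
assert np.allclose(u_tilde @ E, u_coeffs)
```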

### The dual basis and inner product

When the space V carries an inner product, then it is possible to write explicitly a formula for the dual basis of a given basis.  Let V have (not necessarily orthogonal) basis ${\displaystyle {\vec {e}}_{1},\dots ,{\vec {e}}_{n}}$ .  In three dimensions (n = 3), the dual basis can be written explicitly

${\displaystyle {\tilde {\omega }}^{i}({\vec {v}})={1 \over 2}\,\left\langle {\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon ^{ijk}\,({\vec {e}}_{j}\times {\vec {e}}_{k}) \over {\vec {e}}_{1}\cdot {\vec {e}}_{2}\times {\vec {e}}_{3}},{\vec {v}}\right\rangle ,}$

for i = 1, 2, 3, where ε is the Levi-Civita symbol and ${\displaystyle \langle ,\rangle }$  the inner product (or dot product) on V.
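The cross-product formula can be checked directly for a concrete (made-up) non-orthogonal basis of ℝ³, since for i = 1 it reduces to ω̃¹ = (e₂ × e₃)/(e₁ ⋅ e₂ × e₃), and cyclically for i = 2, 3:

```python
import numpy as np

# Verify the 3D cross-product formula for the dual basis with a concrete
# (non-orthogonal) basis: ω^1 = (e2 × e3) / (e1 . e2 × e3), and cyclically.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])
vol = e1 @ np.cross(e2, e3)              # scalar triple product e1 . e2 × e3

w1 = np.cross(e2, e3) / vol
w2 = np.cross(e3, e1) / vol
w3 = np.cross(e1, e2) / vol

B = np.vstack([e1, e2, e3]).T            # basis vectors as columns
W = np.vstack([w1, w2, w3])              # dual functionals as rows
assert np.allclose(W @ B, np.eye(3))     # ω^i(e_j) = δ_ij
```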

In higher dimensions, this generalizes as follows

${\displaystyle {\tilde {\omega }}^{i}({\vec {v}})=\left\langle {\frac {\underset {1\leq i_{2}<i_{3}<\dots <i_{n}\leq n}{\sum }\varepsilon ^{ii_{2}\dots i_{n}}\,\left(\star \left({\vec {e}}_{i_{2}}\wedge \cdots \wedge {\vec {e}}_{i_{n}}\right)\right)}{\star \left({\vec {e}}_{1}\wedge \cdots \wedge {\vec {e}}_{n}\right)}},{\vec {v}}\right\rangle ,}$

where ${\displaystyle \star }$  is the Hodge star operator.

## Notes

1. ^ This follows since just as the image of a vector subspace under a linear transformation is a vector subspace, so is the image of X under f. However, the only vector subspaces (that is, 𝕂-subspaces) of 𝕂 are { 0 } and 𝕂 itself.
2. ^ If g ∈ Xℝ# is non-trivial then the range of g is ℝ; but then g cannot belong to X# because if it did, its range would have to be ℂ rather than ℝ.
3. ^ a b c Narici 2011, pp. 177-220.
4. ^ Narici 2011, pp. 10-11.
5. ^ Obvious if X is a real vector space. For the non-trivial direction, assume that Re fp on X and let xX. Let r ≥ 0 and t be real numbers such that f(x) = reit. Then |f(x)| = r = f(e-itx) = Re (f(e-itx)) ≤ p(e-itx) = p(x).
6. ^ Wilansky 2013, p. 20.
7. ^ a b c Narici 2011, p. 128.
8. ^ a b Narici 2011, p. 126.
9. ^ Rudin 1991, pp. 63-64.
10. ^ Narici 2011, pp. 1-18.
11. ^ Rudin 1991, Theorem 1.18
12. ^ a b c Narici 2011, pp. 225-273.
13. ^ Schaefer, Corollary 4.3
14. ^ Lax 1996
15. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 57. ISBN 0-7167-0344-0.