Covariance and contravariance of vectors

Linear Spaces

In abstract form a linear space (vector space) is any set on which we may perform the operation of taking linear combinations. That is to say, we may add elements (vectors) and multiply them by numbers (scalars).

Example: Column Vectors and Row Vectors

In matrix algebra we may define an ($n$-dimensional) column vector as an $n \times 1$ matrix, i.e. a matrix with only one column. Dually, we may define an ($n$-dimensional) row vector as a $1 \times n$ matrix, i.e. a matrix with only one row. The set of all column vectors (of a given dimension) defines a linear space, as does the set of all row vectors.

The transpose operation defines an isomorphism mapping between row and column vector spaces of the same dimension.
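As a quick illustration, here is a minimal NumPy sketch of column vectors, row vectors, and the transpose isomorphism between them (the use of NumPy and these particular values are our assumption, not something the article prescribes):

```python
import numpy as np

# A 3-dimensional column vector is a 3x1 matrix; its transpose is a 1x3 row vector.
col = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
row = col.T                              # shape (1, 3)

print(col.shape)  # (3, 1)
print(row.shape)  # (1, 3)

# Transposing twice returns the original column vector,
# so the transpose map is invertible (an isomorphism).
assert np.array_equal(row.T, col)

# It is also linear: (a*x + b*y)^T == a*x^T + b*y^T.
x = np.array([[1.0], [0.0], [2.0]])
y = np.array([[0.0], [1.0], [1.0]])
assert np.array_equal((2 * x + 3 * y).T, 2 * x.T + 3 * y.T)
```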

      • (Any two finite-dimensional linear spaces of the same dimension over the same field are isomorphic; choosing a basis for each yields an isomorphism between them.)

Example: Displacements in Euclidean Space

Addition of numbers can be modeled by identifying real numbers with translations of points on the real number line rather than with the points themselves. Hence the number 3 corresponds to the act of shifting points on the real number line to the right by three units. The points themselves can then be indexed by the real number which shifts the origin point to their position. We compose shifts by adding the corresponding numbers: $-5 + 3$ corresponds to shifting right three units and then left five units. (The action acts from the left, so the first to act is the right-most.) We may then generalize to higher-dimensional "number lines", i.e. express real vectors as displacements of points in the Euclidean plane, three-space, or higher-dimensional geometric point spaces.
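A small sketch of this idea (a NumPy illustration under the assumption that shifts are modeled as functions and displacements as arrays; none of these names come from the article):

```python
import numpy as np

# Model real numbers as shifts of the number line: composing the shift
# "right 3" with the shift "left 5" is the shift by -5 + 3 = -2.
def shift(amount):
    return lambda point: point + amount

origin = 0.0
composed = shift(-5)(shift(3)(origin))   # right-most acts first
assert composed == -5 + 3 == -2.0

# The same idea in the Euclidean plane: displacements are 2-vectors,
# and composing shifts is vector addition (order does not matter).
p = np.array([0.0, 0.0])                 # the origin point
d1 = np.array([3.0, 0.0])                # shift right 3
d2 = np.array([-5.0, 1.0])               # shift left 5, up 1
assert np.array_equal(p + d1 + d2, p + d2 + d1)
```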

We may express a position vector for a point simply as the displacement vector shifting the origin point to the given point.

Basis of a Linear Space

A basis of a linear space is a set of elements such that we may express every vector uniquely as a linear combination of these elements. Thus, in a given basis, we can express a vector by giving its list of coefficients.

For example, in 3-dimensional Euclidean space a vector $\mathbf{v}$ may be expressed as a linear combination of unit vectors pointing in the three cardinal directions: $\mathbf{v} = v^1 \mathbf{e}_1 + v^2 \mathbf{e}_2 + v^3 \mathbf{e}_3$.

Thus we express the vector $\mathbf{v}$ as a list of three coordinates $(v^1, v^2, v^3)$.
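A minimal numerical sketch of this expansion (assuming NumPy and the standard basis; the coordinate values are arbitrary):

```python
import numpy as np

# Standard basis of Euclidean 3-space.
e1, e2, e3 = np.eye(3)

# Coordinates (v1, v2, v3) of a vector v in this basis.
v1, v2, v3 = 2.0, -1.0, 4.0
v = v1 * e1 + v2 * e2 + v3 * e3   # the linear combination

# The vector is recovered uniquely from its coordinate list.
assert np.array_equal(v, np.array([2.0, -1.0, 4.0]))
```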

Now the coordinate representation depends on the choice of basis: the same vector may be expressed via different coordinates in different bases, or conversely, two distinct vectors may have the same coordinates when expressed in different bases.


Example

Consider two bases of the two-dimensional Euclidean plane:

$$E = \{\mathbf{e}_1, \mathbf{e}_2\}$$

$$F = \{\mathbf{f}_1, \mathbf{f}_2\}$$

where $\mathbf{f}_1 = \mathbf{e}_1 + \mathbf{e}_2$ and $\mathbf{f}_2 = \mathbf{e}_1 - \mathbf{e}_2$.

Now note that the following two distinct vectors have the same coordinates, $(2, 3)$, in their respective bases:

$$\mathbf{u} = 2\mathbf{e}_1 + 3\mathbf{e}_2, \qquad \mathbf{w} = 2\mathbf{f}_1 + 3\mathbf{f}_2.$$
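A quick numerical check of this example (the particular bases follow the assumed definitions of $\mathbf{f}_1$ and $\mathbf{f}_2$ above):

```python
import numpy as np

E = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # columns are e1, e2 (the standard basis)
F = np.array([[1.0, 1.0],
              [1.0, -1.0]])         # columns are f1 = e1+e2, f2 = e1-e2

coords = np.array([2.0, 3.0])       # the same coordinate list (2, 3)

u = E @ coords                      # 2*e1 + 3*e2
w = F @ coords                      # 2*f1 + 3*f2

print(u)  # [2. 3.]
print(w)  # [5. -1.]
assert not np.array_equal(u, w)     # same coordinates, distinct vectors
```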


Active and Passive Transformations

Main article: Active and passive transformations

When we consider transformations such as rotations as they apply to some important set of vectors, there are two complementary modes of application. We may actively transform the vectors, changing them, or we may passively transform the vectors' representation by (actively) transforming the basis in which the vectors are expressed.

Of interest then is how to express these transformations in terms of transformations of the coordinates (coefficients).
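A concrete sketch of the two modes (NumPy, with a 90° rotation; the helper name rotation is ours, not from any main article):

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix (a hypothetical helper for this sketch)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = rotation(np.pi / 2)      # rotate by 90 degrees
E = np.eye(2)                # columns are the basis vectors
v = np.array([1.0, 0.0])     # coordinates of a vector in basis E

# Active: transform the vector itself; the basis stays fixed.
v_active = R @ v             # ~[0, 1]

# Passive: transform the basis instead (E' = E R); the vector is fixed,
# so its coordinates must change by the inverse rotation.
E_new = E @ R
coords_new = np.linalg.inv(E_new) @ (E @ v)   # ~[0, -1]

print(v_active, coords_new)  # the coordinates move opposite to the basis
```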

Basis Expansion as an Isomorphism Mapping

Now, by identifying a vector in a given space with its list of coefficients, we are in effect defining an isomorphism mapping between the vector space we are using and a concrete coordinate space such as $\mathbb{R}^n$. Typically we express the coordinates in the form of a column vector.

NOTE: We use column vectors rather than row vectors so that the action of a transformation is expressed as left multiplication, fitting our standard notational conventions: $L\mathbf{x} \mapsto M[\mathbf{x}]$, where $M$ is the matrix representation of the linear transformation $L$ and $[\mathbf{x}]$ is the column-vector image of $\mathbf{x}$ in a given basis.

By changing our basis (e.g. with a passive transformation) we are changing this isomorphism mapping so that the original vectors in the space are unchanged but the corresponding column vectors are transformed (since the correspondence itself is changing).

The change in the column vectors is called contravariance, since they must change opposite to the change in the basis. For example, if we double the length of all our basis vectors, we must halve the coordinates of any given fixed vector.
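The doubling example as a NumPy sketch (the basis and coordinate values are arbitrary):

```python
import numpy as np

E = np.eye(3)                   # original basis (columns)
v = np.array([2.0, 4.0, 6.0])   # coordinates of a fixed vector in E

A = 2.0 * np.eye(3)             # change of basis: double every basis vector
E_new = E @ A                   # E' = E A

# The coordinates transform by the inverse of A: they are halved.
v_new = np.linalg.inv(A) @ v
print(v_new)                    # [1. 2. 3.]

# The vector itself is unchanged: E' [v]' == E [v].
assert np.allclose(E_new @ v_new, E @ v)
```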

Other quantities may be defined in terms of the basis and so must change with the basis; these are covariant. For example, consider a linear functional mapping vectors to numbers. In Euclidean 3-space we may define the dot product with respect to some vector $\mathbf{u}$, so that our linear functional is:

$$f_{\mathbf{u}}(\mathbf{v}) = \mathbf{u} \cdot \mathbf{v}.$$

Observe then that this linear functional can be expressed as a row vector, the transpose of the column vector for $\mathbf{u}$:

$$f_{\mathbf{u}}(\mathbf{v}) = [\mathbf{u}]^{\mathsf{T}} [\mathbf{v}].$$

Under a passive transformation the row vector of components transforms in the same way as the basis vectors themselves, and thus these components are covariant with the basis.
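A sketch of this invariance (assuming NumPy; the change-of-basis matrix $A$ is an arbitrary invertible example): the covariant components absorb $A$ while the contravariant ones absorb $A^{-1}$, leaving the scalar $f_{\mathbf{u}}(\mathbf{v})$ unchanged.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])        # components of the functional f_u (row form)
v = np.array([4.0, 0.0, -1.0])       # coordinates of a vector in basis E

A = np.array([[2.0, 1.0, 0.0],       # an arbitrary invertible change of basis
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])

v_new = np.linalg.inv(A) @ v         # contravariant: inverse transformation
u_new = u @ A                        # covariant: same right multiplication as the basis

# The scalar f_u(v) = u^T v is basis-independent.
assert np.isclose(u_new @ v_new, u @ v)
```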


It is helpful to express the reverse of the above-mentioned isomorphism from our space to the space of column vectors by defining the basis as a "row vector of vectors". For example, the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ above is $E = (\mathbf{e}_1\ \mathbf{e}_2\ \mathbf{e}_3)$, and hence:

$$\mathbf{v} = E[\mathbf{v}] = \mathbf{e}_1 v^1 + \mathbf{e}_2 v^2 + \mathbf{e}_3 v^3.$$

Thence a change of basis $E \to E'$ may be expressed by right multiplication by an invertible matrix $A$:

$$E' = E A.$$

In order for a vector to remain unchanged by such a change of basis, its components must transform via left multiplication by the inverse matrix:

$$[\mathbf{v}]' = A^{-1} [\mathbf{v}],$$

where $[\mathbf{v}]$ is the column of coordinates in the basis $E$ and $[\mathbf{v}]'$ is the column of coordinates in the basis $E'$. Indeed $E' [\mathbf{v}]' = E A A^{-1} [\mathbf{v}] = E [\mathbf{v}] = \mathbf{v}$.
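Putting the pieces together as a runnable sketch (NumPy; the basis $E$ and matrix $A$ are arbitrary invertible examples):

```python
import numpy as np

# Basis as a "row vector of vectors": columns of E are the basis vectors,
# so that v = E @ [v] recovers the vector from its coordinate column.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
coords = np.array([3.0, 2.0])            # [v] in basis E
v = E @ coords

A = np.array([[0.0, 1.0],                # an invertible change-of-basis matrix
              [2.0, 1.0]])
E_new = E @ A                            # E' = E A
coords_new = np.linalg.inv(A) @ coords   # [v]' = A^{-1} [v]

# The vector itself is unchanged: E' [v]' == E [v].
assert np.allclose(E_new @ coords_new, v)
```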

Why Specify Covariance and Contravariance

Since we may as easily express coordinates in column or row vector form (and we sometimes desire to always apply transformations via left multiplication), we may write both column vectors expressing coordinates of elements of the linear space, and column vectors meant to be transposed into row vectors expressing linear functionals. We must therefore distinguish between contravariant column vectors (directly the coordinates of a vector) and covariant column vectors (transposes of the row vectors of linear functionals).
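A sketch of why the tag matters (NumPy; the matrix and component values are arbitrary): two columns of numbers that look alike must be transformed differently under a change of basis.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])           # change of basis, E' = E A

x_contra = np.array([3.0, 4.0])      # coordinates of a vector (contravariant)
x_co = np.array([5.0, 6.0])          # transpose of a functional's row vector (covariant)

# Although both are stored as columns, they transform differently:
x_contra_new = np.linalg.inv(A) @ x_contra   # contravariant: A^{-1} on the left
x_co_new = A.T @ x_co                        # covariant: A^T on the left (row form: u' = u A)

# The pairing between a functional and a vector is preserved.
assert np.isclose(x_co_new @ x_contra_new, x_co @ x_contra)
```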

Index Notation and Einstein's Summation Convention

By emphasizing the distinction between covariance and contravariance we needn't distinguish row or column representation and may work purely with an index notation. Hence the Einstein convention of using superscripts to index contravariant components and subscripts to index covariant components.

$$\mathbf{v} = \sum_{k} v^k \mathbf{e}_k$$

$$f_{\mathbf{u}}(\mathbf{v}) = \sum_{k} u_k v^k$$

$$v'^k = \sum_{j} (A^{-1})^k{}_j\, v^j, \qquad u'_k = \sum_{j} u_j\, A^j{}_k$$

Einstein's summation convention is to drop the summation sign and understand that whenever an index is repeated in two factors of a term, once as a superscript and once as a subscript, there is implicitly a summation over all values of that index; thus $f_{\mathbf{u}}(\mathbf{v}) = u_k v^k$. This notation and convention then match up consistently with the covariance and contravariance relationships between components and the objects they represent.
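The convention maps directly onto NumPy's einsum function (this code is our illustrative assumption, not part of the article): repeated index letters are summed over.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # contravariant components v^k
u = np.array([4.0, 5.0, 6.0])   # covariant components u_k

# u_k v^k: the repeated index k is implicitly summed (Einstein convention).
s = np.einsum('k,k->', u, v)
assert np.isclose(s, np.dot(u, v))   # 4 + 10 + 18 = 32

# Likewise a matrix acting on a vector, w^i = M^i_j v^j:
M = np.arange(9.0).reshape(3, 3)
w = np.einsum('ij,j->i', M, v)
assert np.allclose(w, M @ v)
```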