# Algebra representation

In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.

## Examples

### Linear complex structure

One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as ${\displaystyle \mathbb {C} =\mathbb {R} [i]/(i^{2}+1),}$  which corresponds to ${\displaystyle i^{2}=-1.}$  Then a representation of C is a real vector space V, together with an action of C on V (a map ${\displaystyle \mathbb {C} \to \mathrm {End} (V)}$ ). Concretely, this is just an action of i, since i generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
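A minimal numeric sketch of this (the particular matrix for J is one illustrative choice, not the only one): on V = R², rotation by 90° satisfies J² = −I, turning R² into a C-module.

```python
import numpy as np

# A linear complex structure on R^2: J is rotation by 90 degrees,
# so J @ J = -I, mirroring i^2 = -1.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(J @ J, -np.eye(2))

def act(a, b, v):
    """Act by the complex number a + bi on the real vector v."""
    return a * v + b * (J @ v)

# (a + bi) acting on (1, 0) lands on (a, b): R^2 "is" C as a C-module.
v = np.array([1.0, 0.0])
assert np.allclose(act(3, 4, v), np.array([3.0, 4.0]))
```

Since i generates the algebra, specifying J determines the action of every complex number, exactly as the text describes.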

### Polynomial algebras

Another important basic class of examples is representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted ${\displaystyle K[T_{1},\dots ,T_{k}],}$  meaning the representation of the abstract algebra ${\displaystyle K[x_{1},\dots ,x_{k}]}$  where ${\displaystyle x_{i}\mapsto T_{i}.}$
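A small sketch of such a representation (the matrices are illustrative choices; taking T₂ to be a polynomial in T₁ is one easy way to guarantee commutativity):

```python
import numpy as np

# A representation of K[x1, x2] on V = R^3: two commuting operators.
T1 = np.array([[2.0, 1.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 3.0]])
T2 = T1 @ T1 - 4.0 * T1   # T2 = T1^2 - 4 T1, hence T1 T2 = T2 T1

assert np.allclose(T1 @ T2, T2 @ T1)

# The abstract polynomial p(x1, x2) = x1*x2 + x2 then acts as p(T1, T2);
# the map x_i -> T_i extends to every polynomial in the algebra.
p_of_T = T1 @ T2 + T2
```

Any polynomial in the xᵢ acts by the corresponding polynomial in the Tᵢ, which is well defined precisely because the operators commute.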

A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.

Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by ${\displaystyle K[T]}$  and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
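A concrete instance of this single-operator viewpoint (the polynomial p is an illustrative choice): the companion matrix of p(x) = x² − 5x + 6 makes V = K² into a K[T]-module isomorphic to K[x]/(p), the basic building block in the structure theorem.

```python
import numpy as np

# Companion matrix of p(x) = x^2 - 5x + 6 = (x - 2)(x - 3).
C = np.array([[0.0, -6.0],
              [1.0,  5.0]])

# As an operator, C satisfies its own polynomial: p(C) = 0,
# so K^2 is a module over K[x]/(p).
p_of_C = C @ C - 5.0 * C + 6.0 * np.eye(2)
assert np.allclose(p_of_C, np.zeros((2, 2)))

# Its eigenvalues are the roots 2 and 3 of p, as canonical-form
# theory predicts.
eigs = np.sort(np.linalg.eigvals(C).real)
```

Decomposing a general operator's module into such cyclic pieces is exactly what produces the rational and Jordan canonical forms.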

In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.

## Weights

Eigenvalues and eigenvectors can be generalized to algebra representations.

The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation ${\displaystyle \lambda \colon \ A\to R}$  (i.e., an algebra homomorphism from the algebra to its base ring: a linear functional that is also multiplicative). This is known as a weight, and the analogs of an eigenvector and eigenspace are called a weight vector and a weight space.

The case of the eigenvalue of a single operator corresponds to the algebra ${\displaystyle R[T],}$  and a map of algebras ${\displaystyle R[T]\to R}$  is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing ${\displaystyle A\times M\to M}$  is bilinear, "which multiple" is a linear functional on A (in fact an algebra map ${\displaystyle A\to R}$ ), namely the weight. In symbols, a weight vector is a vector ${\displaystyle m\in M}$  such that ${\displaystyle am=\lambda (a)m}$  for all elements ${\displaystyle a\in A,}$  for some linear functional ${\displaystyle \lambda }$  – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
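A numeric sketch for the single-operator case R[T] (the matrix and polynomial are illustrative choices): an eigenvector v with Av = 2v is a weight vector, and the weight sends any polynomial a(T) to a(2).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])    # eigenvector of A with eigenvalue 2

def act(coeffs, w):
    """Act by the polynomial sum_k coeffs[k] * T^k on the vector w."""
    out = np.zeros_like(w)
    power = w.copy()
    for c in coeffs:
        out = out + c * power
        power = A @ power
    return out

# a(T) = 1 + 3T + T^2; the weight is evaluation at 2: lambda(a) = a(2) = 11.
coeffs = [1.0, 3.0, 1.0]
lam_a = sum(c * 2.0**k for k, c in enumerate(coeffs))
assert np.allclose(act(coeffs, v), lam_a * v)
```

The weight λ is multiplicative as well as linear, so it is determined by the single scalar λ(T) = 2, as the text notes.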

Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra ${\displaystyle A}$  – equivalently, it vanishes on the derived algebra – in terms of matrices, if ${\displaystyle v}$  is a common eigenvector of operators ${\displaystyle T}$  and ${\displaystyle U}$ , then ${\displaystyle TUv=UTv}$  (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra ${\displaystyle \mathbf {F} [T_{1},\dots ,T_{k}]}$  acting by a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a ${\displaystyle k}$ -tuple of scalars ${\displaystyle \lambda =(\lambda _{1},\dots ,\lambda _{k})}$  corresponding to the eigenvalue of each matrix, and hence geometrically to a point in ${\displaystyle k}$ -space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
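A small sketch of the commuting-matrices picture (diagonal matrices chosen for transparency): a common eigenvector has a weight that is a pair of scalars, i.e., a point in 2-space.

```python
import numpy as np

# Two commuting operators on R^2 with a common eigenvector.
T = np.diag([2.0, 5.0])
U = np.diag([7.0, 1.0])
assert np.allclose(T @ U, U @ T)

v = np.array([1.0, 0.0])   # simultaneous eigenvector
lam = (2.0, 7.0)           # the weight: a point (lam_1, lam_2) in 2-space

assert np.allclose(T @ v, lam[0] * v)
assert np.allclose(U @ v, lam[1] * v)
# On a common eigenvector the algebra acts commutatively: TUv = UTv.
assert np.allclose(T @ (U @ v), U @ (T @ v))
```

The other standard basis vector is also a weight vector, with weight (5, 1), so this representation has two weights, two points in the plane.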

As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on ${\displaystyle k}$  generators, it corresponds geometrically to an algebraic variety in ${\displaystyle k}$ -dimensional space, and any weight must fall on the variety – i.e., it satisfies the defining equations of the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
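The one-variable case can be checked numerically (the matrix is an illustrative choice): by Cayley–Hamilton, A satisfies its characteristic polynomial, so the module is a representation of the quotient algebra K[x]/(char poly), and every weight – every eigenvalue – is a root of that polynomial.

```python
import numpy as np

A = np.array([[0.0,  1.0],
              [-6.0, 5.0]])

# Coefficients of the characteristic polynomial det(tI - A) = t^2 - 5t + 6.
char_coeffs = np.poly(A)

# Each eigenvalue (weight of K[x]/(char poly)) is a root: it lies on the
# "variety" cut out by the defining equation of the quotient.
for lam in np.linalg.eigvals(A):
    assert abs(np.polyval(char_coeffs, lam)) < 1e-9
```

For k commuting matrices the same picture holds in k-space: the weights lie on the variety cut out by every polynomial relation the matrices satisfy.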