
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[1][2] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Automatic differentiation is distinct from symbolic differentiation and numerical differentiation (the method of finite differences).

Figure 1: How automatic differentiation relates to symbolic differentiation

These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems, at the expense of introducing more software dependencies.

The chain rule, forward and reverse accumulation

Fundamental to AD is the decomposition of differentials provided by the chain rule. For the simple composition y = g(h(x)) = g(w), the chain rule gives

$$\frac{dy}{dx} = \frac{dy}{dw}\,\frac{dw}{dx}$$

Usually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first one computes dw/dx and then dy/dw), while reverse accumulation has the traversal from outside to inside.
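For instance, writing a longer composition y = f(g(h(x))) with intermediates w₁ = h(x) and w₂ = g(w₁) (an illustrative example, not from the original text), the two modes correspond to the two ways of bracketing the same chain-rule product:

$$\frac{dy}{dx} = \frac{dy}{dw_2}\left(\frac{dw_2}{dw_1}\,\frac{dw_1}{dx}\right) \quad \text{(forward)}, \qquad \frac{dy}{dx} = \left(\frac{dy}{dw_2}\,\frac{dw_2}{dw_1}\right)\frac{dw_1}{dx} \quad \text{(reverse)}.$$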

Generally, both forward and reverse accumulation are specific manifestations of applying the operator of program composition, with the appropriate one of the two mappings   being fixed.

Forward accumulation

 
Figure 2: Example of forward accumulation with computational graph

In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, one can do so by repeatedly substituting the derivative of the inner functions in the chain rule:

$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_1}\frac{\partial w_1}{\partial x} = \frac{\partial y}{\partial w_1}\left(\frac{\partial w_1}{\partial w_2}\frac{\partial w_2}{\partial x}\right) = \frac{\partial y}{\partial w_1}\left(\frac{\partial w_1}{\partial w_2}\left(\frac{\partial w_2}{\partial w_3}\frac{\partial w_3}{\partial x}\right)\right) = \cdots$$

This can be generalized to multiple variables as a matrix product of Jacobians.

Compared to reverse accumulation, forward accumulation is very natural and easy to implement as the flow of derivative information coincides with the order of evaluation. One simply augments each variable w with its derivative ẇ (stored as a numerical value, not a symbolic expression),

$$\dot w = \frac{\partial w}{\partial x},$$

as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.

As an example, consider the function:

$$y = f(x_1, x_2) = x_1 x_2 + \sin x_1 = w_1 w_2 + \sin w_1 = w_3 + w_4 = w_5$$

For clarity, the individual sub-expressions have been labeled with the variables wi.

The choice of the independent variable with respect to which differentiation is performed affects the seed values ẇ₁ and ẇ₂. Suppose one is interested in the derivative of this function with respect to x₁. In this case, the seed values should be set to:

$$\dot w_1 = \frac{\partial x_1}{\partial x_1} = 1, \qquad \dot w_2 = \frac{\partial x_2}{\partial x_1} = 0$$

With the seed values set, one may then propagate the values using the chain rule as shown in the table below. Figure 2 shows a pictorial depiction of this process as a computational graph.

  Operations to compute value     Operations to compute derivative
  w₁ = x₁                         ẇ₁ = 1 (seed)
  w₂ = x₂                         ẇ₂ = 0 (seed)
  w₃ = w₁ · w₂                    ẇ₃ = ẇ₁ · w₂ + w₁ · ẇ₂
  w₄ = sin w₁                     ẇ₄ = cos w₁ · ẇ₁
  w₅ = w₃ + w₄                    ẇ₅ = ẇ₃ + ẇ₄

To compute the gradient of this example function, which requires the derivatives of f with respect to not only x1 but also x2, one must perform an additional sweep over the computational graph using the seed values ẇ₁ = 0; ẇ₂ = 1.
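The forward sweep in the table above can be carried out directly in code. The following Python snippet is a minimal illustrative sketch (not part of the original article; the variable names mirror the wᵢ of the example): each intermediate value is paired with its derivative, seeded for differentiation with respect to x₁.

import math

# One forward sweep for y = f(x1, x2) = x1*x2 + sin(x1) at (x1, x2) = (2, 3),
# with the seeds chosen for d/dx1: w1_dot = 1, w2_dot = 0.
x1, x2 = 2.0, 3.0

w1, w1_dot = x1, 1.0                              # seed: dx1/dx1 = 1
w2, w2_dot = x2, 0.0                              # seed: dx2/dx1 = 0
w3, w3_dot = w1 * w2, w1_dot * w2 + w1 * w2_dot   # product rule
w4, w4_dot = math.sin(w1), math.cos(w1) * w1_dot  # chain rule for sin
w5, w5_dot = w3 + w4, w3_dot + w4_dot             # sum rule

print(w5, w5_dot)   # f(2, 3) and df/dx1 = x2 + cos(x1)
# A second sweep with seeds (0, 1) yields df/dx2 = x1, completing the gradient.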

The computational complexity of one sweep of forward accumulation is proportional to the complexity of the original code.

Forward accumulation is more efficient than reverse accumulation for functions f : ℝⁿ → ℝᵐ with m ≫ n as only n sweeps are necessary, compared to m sweeps for reverse accumulation.

Reverse accumulation

 
Figure 3: Example of reverse accumulation with computational graph

In reverse accumulation AD, one first fixes the dependent variable to be differentiated and computes the derivative with respect to each sub-expression recursively. In a pen-and-paper calculation, one can perform the equivalent by repeatedly substituting the derivative of the outer functions in the chain rule:

$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_1}\frac{\partial w_1}{\partial x} = \left(\frac{\partial y}{\partial w_2}\frac{\partial w_2}{\partial w_1}\right)\frac{\partial w_1}{\partial x} = \left(\left(\frac{\partial y}{\partial w_3}\frac{\partial w_3}{\partial w_2}\right)\frac{\partial w_2}{\partial w_1}\right)\frac{\partial w_1}{\partial x} = \cdots$$

In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar (w̄); it is a derivative of a chosen dependent variable with respect to a subexpression w:

$$\bar w = \frac{\partial y}{\partial w}$$

Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is real-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed in order to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables wi as well as the instructions that produced them in a data structure known as a Wengert list (or "tape"),[3][4] which may represent a significant memory issue if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as checkpointing.

The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order):

  Operations to compute derivative
  w̄₅ = 1 (seed)
  w̄₄ = w̄₅
  w̄₃ = w̄₅
  w̄₂ = w̄₃ · w₁
  w̄₁ = w̄₃ · w₂ + w̄₄ · cos w₁

The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes x̄ = ȳ f′(x) in the adjoint; etc.
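A minimal sketch of reverse accumulation in Python (illustrative only; the tape layout and helper names are assumptions, not a library API): the forward pass records each node's parents and local partial derivatives on a Wengert-style tape, and a single backward sweep accumulates the adjoints.

import math

values = []   # primal values, indexed by node id
tape = []     # for each node: list of (parent_id, local_partial) edges

def new_node(value, parents=()):
    values.append(value)
    tape.append(list(parents))
    return len(values) - 1

def add(i, j):
    return new_node(values[i] + values[j], [(i, 1.0), (j, 1.0)])

def mul(i, j):
    # d(uv)/du = v, d(uv)/dv = u
    return new_node(values[i] * values[j], [(i, values[j]), (j, values[i])])

def sin(i):
    return new_node(math.sin(values[i]), [(i, math.cos(values[i]))])

# Forward pass for y = x1*x2 + sin(x1) at (2, 3), recording the tape.
x1, x2 = new_node(2.0), new_node(3.0)
y = add(mul(x1, x2), sin(x1))

# Reverse pass: seed the output adjoint and sweep the tape backwards once.
adjoint = [0.0] * len(values)
adjoint[y] = 1.0                                  # y_bar = dy/dy = 1
for k in reversed(range(len(values))):
    for parent, partial in tape[k]:
        adjoint[parent] += adjoint[k] * partial   # accumulate w_bar

print(adjoint[x1], adjoint[x2])   # gradient: (x2 + cos(x1), x1)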

Reverse accumulation is more efficient than forward accumulation for functions f : ℝⁿ → ℝᵐ with m ≪ n as only m sweeps are necessary, compared to n sweeps for forward accumulation.

Reverse mode AD was first published in 1970 by Seppo Linnainmaa in his master's thesis.[5][6][7]

Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.

Beyond forward and reverse accumulation

Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : ℝⁿ → ℝᵐ with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete.[8] Central to this proof is the idea that there may exist algebraic dependencies between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.

Automatic differentiation using dual numbers

Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at that number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. This approach was generalized by the theory of operational calculus on programming spaces (see Analytic programming space), through tensor algebra of the dual space.

Replace every number x with the number x + x′ε, where x′ is a real number, but ε is an abstract number with the property ε² = 0 (an infinitesimal; see Smooth infinitesimal analysis). Using only this, we get for the regular arithmetic

$$(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon$$
$$(x + x'\varepsilon)\cdot(y + y'\varepsilon) = xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (xy' + yx')\varepsilon$$

and likewise for subtraction and division.

Now, we may calculate polynomials in this augmented arithmetic. If P(x) = p₀ + p₁x + p₂x² + ⋯ + pₙxⁿ, then

$$P(x + x'\varepsilon) = p_0 + p_1(x + x'\varepsilon) + \cdots + p_n(x + x'\varepsilon)^n = P(x) + P^{(1)}(x)\,x'\varepsilon$$

where P⁽¹⁾ denotes the derivative of P with respect to its first argument, and x′, called a seed, can be chosen arbitrarily.

The new arithmetic consists of ordered pairs, elements written ⟨u, u′⟩, with ordinary arithmetic on the first component and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions, we obtain a list of the basic arithmetic and some standard functions for the new arithmetic:

$$\langle u, u'\rangle + \langle v, v'\rangle = \langle u + v,\; u' + v'\rangle$$
$$\langle u, u'\rangle - \langle v, v'\rangle = \langle u - v,\; u' - v'\rangle$$
$$\langle u, u'\rangle \cdot \langle v, v'\rangle = \langle uv,\; u'v + uv'\rangle$$
$$\langle u, u'\rangle / \langle v, v'\rangle = \left\langle \frac{u}{v},\; \frac{u'v - uv'}{v^2} \right\rangle \quad (v \neq 0)$$
$$\sin\langle u, u'\rangle = \langle \sin u,\; u'\cos u\rangle$$
$$\cos\langle u, u'\rangle = \langle \cos u,\; -u'\sin u\rangle$$
$$\exp\langle u, u'\rangle = \langle e^u,\; u'e^u\rangle$$
$$\log\langle u, u'\rangle = \langle \log u,\; u'/u\rangle \quad (u > 0)$$

and in general for the primitive function g,

$$g(\langle u, u'\rangle, \langle v, v'\rangle) = \langle g(u, v),\; g_u(u, v)\,u' + g_v(u, v)\,v'\rangle$$

where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.

When a binary basic arithmetic operation is applied to mixed arguments, say the pair ⟨u, u′⟩ and the real number c, the real number is first lifted to ⟨c, 0⟩. The derivative of a function f : ℝ → ℝ at the point x₀ is now found by calculating f(⟨x₀, 1⟩) using the above arithmetic, which gives ⟨f(x₀), f′(x₀)⟩ as the result.
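A minimal Python sketch of this dual-number arithmetic (the class name Dual and the example function are illustrative assumptions, not from the article): pairs ⟨u, u′⟩ become objects, elementary operators are overloaded with the rules above, and real constants are lifted to ⟨c, 0⟩.

import math

# A dual number <u, u'>: the first component carries the value, the second the derivative.
class Dual:
    def __init__(self, u, du=0.0):
        self.u, self.du = u, du

    @staticmethod
    def lift(c):
        # a real constant c is lifted to <c, 0>
        return c if isinstance(c, Dual) else Dual(c, 0.0)

    def __add__(self, other):
        other = Dual.lift(other)
        return Dual(self.u + other.u, self.du + other.du)

    def __mul__(self, other):
        other = Dual.lift(other)
        return Dual(self.u * other.u, self.du * other.u + self.u * other.du)

    def __truediv__(self, other):
        other = Dual.lift(other)
        return Dual(self.u / other.u,
                    (self.du * other.u - self.u * other.du) / other.u ** 2)

def exp(x):
    return Dual(math.exp(x.u), x.du * math.exp(x.u))

# Example: f(x) = exp(x) / x; the derivative at x0 is read off the second
# component of f(<x0, 1>), i.e. the seed x' = 1 is used.
def f(x):
    return exp(x) / x

x0 = 1.5
result = f(Dual(x0, 1.0))
print(result.u, result.du)   # f(x0) and f'(x0) = exp(x0)*(x0 - 1)/x0**2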

Vector arguments and functions

Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y′ = ∇f(x) · x′, the directional derivative of f at x in the direction x′, this may be calculated as ⟨y, y′⟩ = f(⟨x, x′⟩) using the same arithmetic as above. If all the elements of the gradient are desired, then n function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
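A short sketch of this directional-derivative computation (the function g and the direction below are illustrative assumptions): each component of x is paired with the corresponding component of the direction, and a single sweep returns ∇g(x) · x′.

import math

# Directional derivative via one forward sweep: each input carries the matching
# component of the direction vector, and derivatives propagate through the
# elementary operations exactly as in the scalar case.
def g(pairs):
    # g(a, b, c) = a*b + sin(c), each argument given as (value, derivative)
    (a, da), (b, db), (c, dc) = pairs
    s, ds = a * b, da * b + a * db            # product rule
    return s + math.sin(c), ds + math.cos(c) * dc

x = [1.0, 2.0, 0.5]
direction = [0.0, 1.0, 1.0]                   # the direction x' of differentiation
value, dir_deriv = g(list(zip(x, direction))) # one sweep gives grad g(x) . x'
print(value, dir_deriv)                       # dir_deriv = a + cos(c) = 1 + cos(0.5)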

High order and many variables

The above arithmetic can be generalized to calculate second-order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow very complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, makes it possible to compute efficiently using functions as if they were a new data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted. A rigorous, general formulation is achieved through the tensor series expansion using operational calculus on programming spaces.
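A sketch of second-order truncated arithmetic (a hand-rolled illustration under stated assumptions, not a library): each quantity carries the triple (value, first derivative, second derivative), and the rules below are the Leibniz and chain rules truncated at order two.

import math

# Truncated second-order arithmetic on triples (f, f', f'').
def t_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def t_mul(a, b):
    # Leibniz rule: (ab)'' = a''b + 2 a'b' + ab''
    return (a[0] * b[0],
            a[1] * b[0] + a[0] * b[1],
            a[2] * b[0] + 2.0 * a[1] * b[1] + a[0] * b[2])

def t_sin(a):
    # chain rule up to second order: (sin a)'' = cos(a) a'' - sin(a) (a')^2
    s, c = math.sin(a[0]), math.cos(a[0])
    return (s, c * a[1], c * a[2] - s * a[1] ** 2)

# h(x) = x*x + sin(x); the independent variable is seeded as (x0, 1, 0).
x0 = 0.7
x = (x0, 1.0, 0.0)
h = t_add(t_mul(x, x), t_sin(x))
print(h)   # (h(x0), h'(x0), h''(x0)) = (x0**2 + sin x0, 2*x0 + cos x0, 2 - sin x0)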

Operational calculus on programming spaces

Operational calculus on programming spaces [9] generalizes concepts of automatic differentiation and provides deep learning with a formal calculus. This formulation using tensor algebra is a generalization of the dual numbers approach.

Differentiable programming space

A differentiable programming space   is any subspace of   such that

 

where   is the tensor algebra of the dual space  . When all elements of   are analytic, we call   an analytic programming space.

Theorem.
Any differentiable programming space   is an infinitely differentiable programming space, meaning that
 
for any  . If all elements of   are analytic, then so are the elements of  .
Definition.
Let   be a differentiable programming space. The space   spanned by   over  , where  , is called a differentiable programming space of order  .
Corollary.
A differentiable programming space of order  ,  , can be embedded into the tensor product of the function space   and the subspace   of the tensor algebra of the dual of the virtual space  .
By taking the limit as  , we consider
 
where   is the tensor series algebra, the algebra of the infinite formal tensor series, which is a completion of the tensor algebra   in a suitable topology.

Proofs can be found in [9].

This means that we can represent the calculation of derivatives of the map   with only one mapping  . We define the operator   as a direct sum of operators

 

The image   is a multitensor of order  , which is a direct sum of the map's value and all derivatives of order  , all evaluated at the point  :

 

The operator   satisfies the recursive relation

 

that can be used to recursively construct programming spaces of arbitrary order. Only explicit knowledge of   is required for the construction of   from  , which is evident from the above theorem.

Virtual tensor machine

The paper [9] proposed an abstract virtual machine capable of constructing and implementing the theory. Such a machine provides a framework for analytic study of algorithmic procedures through algebraic means.

Claim
The tuple   and the associated tensor series algebra are sufficient conditions for the existence and construction of infinitely differentiable programming spaces  , through linear combinations of elements of  .

This claim allows a simple definition of such a machine.

Definition(Virtual tensor machine)
The tuple   is a virtual tensor machine, where
  •   is a finite dimensional vector space
  •   is the virtual memory space
  •   is an analytic programming space over  

Tensor series expansion

Expansion into a series offers valuable insights into programs through methods of analysis.

There exists a space spanned by the set   over a field  . Thus, the expression

 

is well defined. The operator   is a mapping between function spaces

 

It also defines a map

 

by taking the image of the map   at a certain point  .

We may construct a map from the space of programs to the space of polynomials. Note that the space of multivariate polynomials   is isomorphic to the symmetric algebra  , which is in turn a quotient of the tensor algebra  . To any element of   one can attach a corresponding element of  , namely a polynomial map  . Thus, we consider the completion of the symmetric algebra   as the formal power series  , which is in turn isomorphic to a quotient of the tensor series algebra  , arriving at

 

For any element  , the expression   is a map  , mapping a program to a formal power series. We can express the correspondence between multi-tensors in   and polynomial maps   given by multiple contractions for all possible indices.

Theorem.
For a program   the expansion into an infinite tensor series at the point   is expressed by multiple contractions
 

A proof can be found in [9]. Evaluated at  , the operator is a generalization of the shift operator widely used in physics. For a specific   it is hereafter denoted by

 

When the choice of   is arbitrary, we omit it from expressions for brevity. Following this work, a similar approach was taken by others.[10]

Operator of program composition

The theory offers a generalization of both the forward and reverse modes of automatic differentiation to arbitrary order, under a single invariant operator. This condenses complex notions into simple expressions, allowing meaningful manipulations before they are applied to a particular programming space.

Theorem.
Composition of maps   is expressed as
 
where   is an operator on pairs of maps  , where   is applied to   and   to  .

A proof can be found in [9].

Both forward and reverse mode (generalized to arbitrary order) are obtainable using this operator by fixing the appropriate one of the two maps. This unifies both concepts under a single operator in the theory. For example, by considering projections of the operator onto the space spanned by   and fixing the second map  , we retrieve the basic first-order forward mode of automatic differentiation; reverse mode is retrieved by fixing  .

Thus the operator alleviates the need for explicit implementation of the higher order chain rule (see Faà di Bruno's formula), as it is encoded in the structure of the operator itself, which can be efficiently implemented by manipulating its generating map (see [9]).

Order reduction for nested applications

It is useful to be able to use the  -th derivative of a program   as part of a different differentiable program  . To do so, we must be able to treat the derivative itself as a differentiable program  , while only coding the original program  .

Theorem
There exists a reduction of order map   satisfying
 
for each  , where   is the projection of the operator   onto the set  .

By the above Theorem,  -differentiable  -th derivatives of a program   can be extracted by

 

Thus, we gain the ability to write a differentiable program acting on derivatives of another program, an ability stressed as crucial by other authors.[11]

Implementation

Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.

Source code transformation (SCT)

 
Figure 4: Example of how source code transformation could work

The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.

Source code transformation can be implemented for all programming languages, and it also makes it easier for the compiler to perform compile-time optimizations. However, the implementation of the AD tool itself is more difficult. An illustration of the idea follows below.
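The following Python snippet is a hand-written illustration of what a transformation tool might emit for the running example (hypothetical output; real tools such as Tapenade generate different code): derivative statements are interleaved with the original ones.

from math import sin, cos

# Original function: y = x1*x2 + sin(x1)
def f(x1, x2):
    w3 = x1 * x2
    w4 = sin(x1)
    return w3 + w4

# Hypothetical transformed source: derivative statements interleaved with the
# original ones; d_x1 and d_x2 are the seed values of the independent variables.
def f_d(x1, d_x1, x2, d_x2):
    w3 = x1 * x2
    d_w3 = d_x1 * x2 + x1 * d_x2
    w4 = sin(x1)
    d_w4 = cos(x1) * d_x1
    w5 = w3 + w4
    d_w5 = d_w3 + d_w4
    return w5, d_w5

print(f(2.0, 3.0))
print(f_d(2.0, 1.0, 3.0, 0.0))   # value and df/dx1 = x2 + cos(x1)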

Operator overloading (OO)

 
Figure 5: Example of how operator overloading could work

Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations.

Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, current compilers lag behind in optimizing the code when compared to forward accumulation.

Operator overloading, for both forward and reverse accumulation, can be well-suited to applications where the objects are vectors of real numbers rather than scalars. This is because the tape then comprises vector operations; this can facilitate computationally efficient implementations where each vector operation performs many scalar operations. Vector adjoint algorithmic differentiation (vector AAD) techniques may be used, for example, to differentiate values calculated by Monte-Carlo simulation.

Examples of operator-overloading implementations of automatic differentiation in C++ are the Adept and Stan libraries.

References

  1. ^ Neidinger, Richard D. (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review. 52 (3): 545–563. doi:10.1137/080743627. 
  2. ^ http://www.ec-securehost.com/SIAM/SE24.html
  3. ^ R.E. Wengert (1964). "A simple automatic derivative evaluation program". Comm. ACM. 7: 463–464. doi:10.1145/355586.364791. 
  4. ^ Bartholomew-Biggs, Michael; Brown, Steven; Christianson, Bruce; Dixon, Laurence (2000). "Automatic differentiation of algorithms" (PDF). Journal of Computational and Applied Mathematics. 124 (1-2): 171–190. doi:10.1016/S0377-0427(00)00422-2. 
  5. ^ Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.
  6. ^ Linnainmaa, Seppo (1976). Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2), 146-160.
  7. ^ Griewank, Andreas (2012). Who Invented the Reverse Mode of Differentiation?. Optimization Stories, Documenta Matematica, Extra Volume ISMP (2012), 389-400.
  8. ^ Naumann, Uwe (April 2008). "Optimal Jacobian accumulation is NP-complete". Mathematical Programming. 112 (2): 427–441. doi:10.1007/s10107-006-0042-z.
  9. ^ Sajovic, Žiga; Vuk, Martin (2016). "Operational calculus on programming spaces". arXiv:1610.07690 [math.FA].
  10. ^ Izzo, Dario; Biscani, Francesco (2016). "Differentiable Genetic Programming". arXiv:1611.04766 [math.FA].
  11. ^ Pearlmutter, Barak A.; Siskind, Jeffrey M. (May 2008). "Putting the Automatic Back into AD: Part II". ECE Technical Reports.
