In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Besides numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical object on which an operation denoted "+" is defined.

Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.

The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result does not depend on the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements) results, by convention, in 0.
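As an illustration, these conventions are mirrored by Python's built-in `sum` (a minimal sketch; the values are chosen purely for illustration):

```python
# Summation of an explicit sequence; grouping and order do not matter.
print(sum([1, 2, 4, 2]))   # 9
print(sum([2, 4, 2, 1]))   # 9, since addition is commutative
print(sum([7]))            # 7, a one-element sequence sums to that element
print(sum([]))             # 0, the empty sum
```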

Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written 1 + 2 + 3 + 4 + ⋅⋅⋅ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where Σ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers is denoted $\sum_{i=1}^{n} i$.

For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,[a]

$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}.$
Although such formulas do not always exist, many summation formulas have been discovered. Some of the most common and elementary ones are listed in this article.

Notation

Capital-sigma notation

The summation symbol

Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, $\sum$, an enlarged form of the upright capital Greek letter Sigma. This is defined as:

$\sum_{i=m}^{n} a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n$

where i represents the index of summation; $a_i$ is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by 1 for each successive term, stopping when i = n.[b]
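In programming terms, this is simply a loop over the index from the lower bound to the upper bound inclusive; a minimal Python sketch (the function name `sigma` is illustrative):

```python
def sigma(a, m, n):
    """Evaluate the sum of a(i) for i = m, m+1, ..., n (inclusive)."""
    return sum(a(i) for i in range(m, n + 1))  # range excludes its end, hence n + 1

print(sigma(lambda i: i**2, 3, 6))  # 3**2 + 4**2 + 5**2 + 6**2 = 86
```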

Here is an example showing the summation of squares:

$\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 86.$

Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in:

$\sum a_i^2$

One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. Here are some common examples:

$\sum_{0 \le k < 100} f(k)$

is the sum of $f(k)$ over all (integers) $k$ in the specified range,

$\sum_{x \in S} f(x)$

is the sum of $f(x)$ over all elements $x$ in the set $S$, and

$\sum_{d \mid n} \mu(d)$

is the sum of $\mu(d)$ over all positive integers $d$ dividing $n$.[c]
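Such condition-based sums correspond to filtered iteration in code; a small Python sketch, where the summand `f`, the set `S` and the integer `n` are placeholder examples rather than anything from the text:

```python
def f(k):
    return k * k  # placeholder summand

# Sum of f(k) over all integers k in the range 0 <= k < 100.
s_range = sum(f(k) for k in range(100))

# Sum of f(x) over all elements x of a finite set S.
S = {2, 3, 5, 7}
s_set = sum(f(x) for x in S)

# Sum of f(d) over all positive integers d dividing n.
n = 12
s_divisors = sum(f(d) for d in range(1, n + 1) if n % d == 0)

print(s_range, s_set, s_divisors)
```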

There are also ways to generalize the use of many sigma signs. For example,

$\sum_{i,j}$

is the same as

$\sum_{i} \sum_{j}.$

A similar notation is used to denote the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with $\prod$, an enlarged form of the Greek capital letter Pi, replacing the $\sum$.
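In Python these two conventions are mirrored by the built-in `sum` and by `math.prod` (a minimal sketch):

```python
import math

values = [2, 3, 4]
print(sum(values))        # 9;  sum([]) would give 0, the empty sum
print(math.prod(values))  # 24; math.prod([]) would give 1, the empty product
```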

Special cases

It is possible to sum fewer than 2 numbers:

  • If the summation has one summand $x$, then the evaluated sum is $x$.
  • If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.

These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if $n = m$ in the definition above, then there is only one term in the sum; if $n = m - 1$, then there is none.

Formal definition

Summation may be defined recursively as follows:

  $\sum_{i=a}^{b} g(i) = 0$, for b < a;
  $\sum_{i=a}^{b} g(i) = g(b) + \sum_{i=a}^{b-1} g(i)$, for b ≥ a.
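A direct transcription of this recursion into Python might look as follows (a sketch; the name `summation` and the test function are illustrative):

```python
def summation(g, a, b):
    """Recursively compute g(a) + g(a+1) + ... + g(b); the sum is empty when b < a."""
    if b < a:
        return 0                          # empty sum
    return g(b) + summation(g, a, b - 1)  # peel off the last term

print(summation(lambda i: i, 1, 100))  # 5050, the sum of the first 100 natural numbers
```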

Measure theory notation

In the notation of measure and integration theory, a sum can be expressed as a definite integral,

$\sum_{k=a}^{b} f(k) = \int_{[a,b]} f \, d\mu$

where $[a,b]$ is the subset of the integers from $a$ to $b$, and where $\mu$ is the counting measure.

Calculus of finite differences

Given a function f that is defined over the integers in the interval [m, n], one has

$f(n) - f(m) = \sum_{i=m}^{n-1} \left( f(i+1) - f(i) \right).$

This is the analogue in the calculus of finite differences of the fundamental theorem of calculus, which states

$f(n) - f(m) = \int_{m}^{n} f'(x) \, dx,$

where

$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$

is the derivative of f.

An example of an application of the above equation is

$n^k = \sum_{i=0}^{n-1} \left( (i+1)^k - i^k \right).$

Using the binomial theorem, this may be rewritten as

$n^k = \sum_{i=0}^{n-1} \sum_{j=0}^{k-1} \binom{k}{j} i^j.$

The above formula is more commonly used for inverting the difference operator $\Delta$, defined by

$\Delta(f)(n) = f(n+1) - f(n),$

where f is a function defined on the nonnegative integers. Thus, given such a function f, the problem is to compute the antidifference of f, that is, a function $F = \Delta^{-1} f$ such that $\Delta F = f$, that is, $F(n+1) - F(n) = f(n)$. This function is defined up to the addition of a constant, and may be chosen as[1]

$F(n) = \sum_{i=0}^{n-1} f(i).$

There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where $f(n) = n^k$ and, by linearity, for every polynomial function of n.
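The difference operator and the antidifference are straightforward to compute numerically; the following Python sketch (the names `delta` and `antidifference` are illustrative) checks that applying Δ to the antidifference recovers the original function:

```python
def delta(F):
    """Forward difference operator: (delta F)(n) = F(n+1) - F(n)."""
    return lambda n: F(n + 1) - F(n)

def antidifference(f):
    """The antidifference F(n) = f(0) + f(1) + ... + f(n-1), with F(0) = 0."""
    return lambda n: sum(f(i) for i in range(n))

f = lambda n: n**2
F = antidifference(f)
print(all(delta(F)(n) == f(n) for n in range(20)))  # True
```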

Approximation by definite integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:

increasing function f:

$\int_{s-1}^{t} f(x) \, dx \le \sum_{i=s}^{t} f(i) \le \int_{s}^{t+1} f(x) \, dx;$

decreasing function f:

$\int_{s}^{t+1} f(x) \, dx \le \sum_{i=s}^{t} f(i) \le \int_{s-1}^{t} f(x) \, dx.$

For more general approximations, see the Euler–Maclaurin formula.

For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance

$\frac{1}{n} \sum_{i=0}^{n-1} f\!\left(\frac{i}{n}\right) \approx \int_{0}^{1} f(x) \, dx,$

since the right-hand side is by definition the limit for $n \to \infty$ of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
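For a concrete, well-behaved example, take f(x) = x², whose integral on [0, 1] is 1/3; a Python sketch showing the Riemann sum closing in on that value as n grows (the name `riemann_sum` is illustrative):

```python
def riemann_sum(f, n):
    """Left Riemann sum (1/n) * sum of f(i/n) for i = 0, ..., n-1."""
    return sum(f(i / n) for i in range(n)) / n

f = lambda x: x * x
for n in (10, 100, 1000):
    print(n, riemann_sum(f, n))  # approaches 1/3 as n increases
```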

Identities

The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.

General identities

  $\sum_{n=s}^{t} C \cdot f(n) = C \cdot \sum_{n=s}^{t} f(n)$ (distributivity)
  $\sum_{n=s}^{t} f(n) \pm \sum_{n=s}^{t} g(n) = \sum_{n=s}^{t} \left[ f(n) \pm g(n) \right]$ (commutativity and associativity)
  $\sum_{n=s}^{t} f(n) = \sum_{n=s+p}^{t+p} f(n-p)$ (index shift)
  $\sum_{n \in B} f(n) = \sum_{m \in A} f(\sigma(m))$ for a bijection σ from a finite set A onto a set B (index change); this generalizes the preceding formula.
  $\sum_{n=s}^{t} f(n) = \sum_{n=s}^{j} f(n) + \sum_{n=j+1}^{t} f(n)$ (splitting a sum, using associativity)
  $\sum_{n=a}^{b} f(n) = \sum_{n=0}^{b} f(n) - \sum_{n=0}^{a-1} f(n)$ (a variant of the preceding formula)
  $\sum_{n=s}^{t} f(n) = \sum_{n=0}^{t-s} f(t-n)$ (commutativity and associativity, again)
  $\sum_{k \le j \le i \le n} a_{i,j} = \sum_{i=k}^{n} \sum_{j=k}^{i} a_{i,j} = \sum_{j=k}^{n} \sum_{i=j}^{n} a_{i,j}$ (another application of commutativity and associativity)
  $\sum_{n=2s}^{2t+1} f(n) = \sum_{n=s}^{t} f(2n) + \sum_{n=s}^{t} f(2n+1)$ (splitting a sum into its odd and even parts, and changing the indices)
  $\left( \sum_{i=s}^{m} a_i \right) \left( \sum_{j=t}^{n} b_j \right) = \sum_{i=s}^{m} \sum_{j=t}^{n} a_i b_j$ (distributivity)
  $\sum_{i=s}^{m} \sum_{j=t}^{n} a_i c_j = \left( \sum_{i=s}^{m} a_i \right) \left( \sum_{j=t}^{n} c_j \right)$ (distributivity allows factorization)
  $\sum_{n=s}^{t} \log_b f(n) = \log_b \prod_{n=s}^{t} f(n)$ (the logarithm of a product is the sum of the logarithms of the factors)
  $C^{\left[ \sum_{n=s}^{t} f(n) \right]} = \prod_{n=s}^{t} C^{f(n)}$ (the exponential of a sum is the product of the exponential of the summands)
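Identities of this kind are easy to spot-check numerically; a short Python sketch verifying the index shift and the odd/even split for an arbitrarily chosen test function:

```python
f = lambda n: n**3 - 2 * n + 1   # arbitrary test summand
s, t, p = 2, 9, 5

# Index shift: sum_{n=s}^{t} f(n) == sum_{n=s+p}^{t+p} f(n - p)
print(sum(f(n) for n in range(s, t + 1))
      == sum(f(n - p) for n in range(s + p, t + p + 1)))   # True

# Odd/even split: sum_{n=2s}^{2t+1} f(n) == sum_{n=s}^{t} f(2n) + sum_{n=s}^{t} f(2n+1)
print(sum(f(n) for n in range(2 * s, 2 * t + 2))
      == sum(f(2 * n) for n in range(s, t + 1))
         + sum(f(2 * n + 1) for n in range(s, t + 1)))      # True
```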

Powers and logarithm of arithmetic progressions

  $\sum_{i=1}^{n} c = nc$ for every c that does not depend on i
  $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ (Sum of the simplest arithmetic progression, consisting of the first n natural numbers.)[2]
  $\sum_{i=1}^{n} (2i - 1) = n^2$ (Sum of first odd natural numbers)
  $\sum_{i=1}^{n} 2i = n(n+1)$ (Sum of first even natural numbers)
  $\sum_{i=1}^{n} \log i = \log n!$ (A sum of logarithms is the logarithm of the product)
  $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}$ (Sum of the first squares, see square pyramidal number.) [2]
  $\sum_{i=1}^{n} i^3 = \left( \frac{n(n+1)}{2} \right)^2 = \left( \sum_{i=1}^{n} i \right)^2$ (Nicomachus's theorem) [2]

More generally,

$\sum_{i=1}^{n} i^p = \frac{n^{p+1}}{p+1} + \frac{1}{2} n^p + \sum_{k=2}^{p} \binom{p}{k} \frac{B_k}{p-k+1} \, n^{p-k+1}$

where $B_k$ denotes a Bernoulli number (that is Faulhaber's formula).
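A quick Python check of the first few closed forms against brute-force summation (a sketch; the name `brute` is illustrative):

```python
def brute(p, n):
    """Sum of i**p for i = 1, ..., n, computed term by term."""
    return sum(i**p for i in range(1, n + 1))

n = 50
print(brute(1, n) == n * (n + 1) // 2)                  # sum of the first n integers
print(brute(2, n) == n * (n + 1) * (2 * n + 1) // 6)    # sum of the first n squares
print(brute(3, n) == (n * (n + 1) // 2) ** 2)           # Nicomachus's theorem
```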

Summation index in exponents

In the following summations, a is assumed to be different from 1.

  $\sum_{i=0}^{n-1} a^i = \frac{1 - a^n}{1 - a}$ (sum of a geometric progression)
  $\sum_{i=0}^{n-1} \frac{1}{2^i} = 2 - \frac{1}{2^{n-1}}$ (special case for a = 1/2)
  $\sum_{i=0}^{n-1} i a^i = \frac{a - n a^n + (n-1) a^{n+1}}{(1-a)^2}$ (a times the derivative with respect to a of the geometric progression)
  $\sum_{i=0}^{n-1} (b + id) a^i = b \sum_{i=0}^{n-1} a^i + d \sum_{i=0}^{n-1} i a^i = \frac{b(1 - a^n)}{1 - a} + \frac{d \left( a - n a^n + (n-1) a^{n+1} \right)}{(1-a)^2}$
(sum of an arithmetico–geometric sequence)
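These are also easy to confirm numerically; a Python sketch using exact fractions, with a = 3 and n = 8 chosen purely for illustration:

```python
from fractions import Fraction

a, n = Fraction(3), 8

print(sum(a**i for i in range(n)) == (1 - a**n) / (1 - a))         # geometric progression
print(sum(i * a**i for i in range(n))
      == (a - n * a**n + (n - 1) * a**(n + 1)) / (1 - a)**2)       # weighted variant
```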

Binomial coefficients and factorials

There are very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.

Involving the binomial theorem

  $\sum_{i=0}^{n} \binom{n}{i} a^{n-i} b^i = (a + b)^n$, the binomial theorem
  $\sum_{i=0}^{n} \binom{n}{i} = 2^n$, the special case where a = b = 1
  $\sum_{i=0}^{n} \binom{n}{i} p^i (1-p)^{n-i} = 1$, the special case where p = a = 1 − b, which, for $0 \le p \le 1$, expresses the sum of the binomial distribution
  $\sum_{i=0}^{n} i \binom{n}{i} = n \, 2^{n-1}$, the value at a = b = 1 of the derivative with respect to a of the binomial theorem
  $\sum_{i=0}^{n} \frac{1}{i+1} \binom{n}{i} = \frac{2^{n+1} - 1}{n+1}$, the value at a = b = 1 of the antiderivative with respect to a of the binomial theorem
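These identities can be spot-checked with `math.comb`; a short Python sketch (n = 10 is arbitrary):

```python
from math import comb
from fractions import Fraction

n = 10
print(sum(comb(n, i) for i in range(n + 1)) == 2**n)                # a = b = 1
print(sum(i * comb(n, i) for i in range(n + 1)) == n * 2**(n - 1))  # derivative at a = b = 1
print(sum(Fraction(comb(n, i), i + 1) for i in range(n + 1))
      == Fraction(2**(n + 1) - 1, n + 1))                           # antiderivative at a = b = 1
```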

Involving permutation numbers

In the following summations, ${}_{n}P_{k}$ is the number of k-permutations of n.

 
 
  $\sum_{k=0}^{n} {}_{n}P_{k} = \lfloor n! \cdot e \rfloor$, for $n \ge 1$, where $\lfloor x \rfloor$ denotes the floor function.

Others

 
 
 
 
 

Harmonic numbers

  $\sum_{i=1}^{n} \frac{1}{i} = H_n$ (that is the nth harmonic number)
  $\sum_{i=1}^{n} \frac{1}{i^k} = H_n^k$ (that is a generalized harmonic number)
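A Python sketch computing these exactly with rational arithmetic (the name `harmonic` is illustrative):

```python
from fractions import Fraction

def harmonic(n, k=1):
    """Generalized harmonic number: sum of 1/i**k for i = 1, ..., n."""
    return sum(Fraction(1, i**k) for i in range(1, n + 1))

print(harmonic(4))     # 25/12, the 4th harmonic number
print(harmonic(4, 2))  # 205/144, a generalized harmonic number
```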

Growth rates

The following are useful approximations (using theta notation):

  $\sum_{i=1}^{n} i^c \in \Theta(n^{c+1})$ for real c greater than −1
  $\sum_{i=1}^{n} \frac{1}{i} \in \Theta(\log n)$ (See Harmonic number)
  $\sum_{i=1}^{n} c^i \in \Theta(c^n)$ for real c greater than 1
  $\sum_{i=1}^{n} \log(i)^c \in \Theta(n \cdot \log(n)^c)$ for non-negative real c
  $\sum_{i=1}^{n} \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^c)$ for non-negative real c, d
  $\sum_{i=1}^{n} \log(i)^c \cdot i^d \cdot b^i \in \Theta(n^d \cdot \log(n)^c \cdot b^n)$ for non-negative real c, d and real b greater than 1

See also

Notes

  a. ^ For details, see Triangular number.
  b. ^ For a detailed exposition on summation notation, and arithmetic with sums, see Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 2: Sums". Concrete Mathematics: A Foundation for Computer Science (2nd ed.). Addison-Wesley Professional. ISBN 978-0201558029.
  c. ^ Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet ($i$ through $q$) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see $x$ instead of $k$ in the above formulae involving $k$. See also typographical conventions in mathematical formulae.

Sources

  1. ^ Handbook of Discrete and Combinatorial Mathematics, Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1.
  2. ^ a b c CRC, p 52

External links