In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
If u = u(x) and du = u′(x) dx, while v = v(x) and dv = v′(x) dx, then integration by parts states that:

∫ u(x) v′(x) dx = u(x) v(x) − ∫ u′(x) v(x) dx,

or, more compactly in terms of the differentials du and dv,

∫ u dv = u v − ∫ v du.
This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a and x = b and applying the fundamental theorem of calculus gives the definite integral version:

∫_a^b u(x) v′(x) dx = u(b) v(b) − u(a) v(a) − ∫_a^b u′(x) v(x) dx.
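As a quick numerical sanity check of the definite-integral version (a sketch, not part of the original text: the composite-Simpson helper and the choice u(x) = x², v(x) = eˣ are mine):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# u(x) = x^2, v(x) = e^x (so v'(x) = e^x as well), on [a, b] = [0, 1].
u, du = (lambda x: x * x), (lambda x: 2 * x)
v, dv = math.exp, math.exp

a, b = 0.0, 1.0
lhs = simpson(lambda x: u(x) * dv(x), a, b)                        # ∫ u v′ dx
rhs = u(b) * v(b) - u(a) * v(a) - simpson(lambda x: du(x) * v(x), a, b)
print(abs(lhs - rhs) < 1e-9)  # the two sides of the formula agree
```

Here lhs ≈ e − 2, the exact value of the integral of x² eˣ over [0, 1].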
The original integral ∫ uv′ dx contains the derivative v′; to apply the theorem, one must find v, the antiderivative of v′, then evaluate the resulting integral ∫ vu′ dx.
It is not necessary for u and v to be continuously differentiable. Integration by parts works if u is absolutely continuous and the function designated v′ is Lebesgue integrable (but not necessarily continuous). (If v′ has a point of discontinuity then its antiderivative v may not have a derivative at that point.)
If the interval of integration is not compact, then it is not necessary for u to be absolutely continuous in the whole interval or for v′ to be Lebesgue integrable in the interval, as a couple of examples (in which u and v are continuous and continuously differentiable) will show. For instance, if

u(x) = eˣ/x² and v′(x) = e⁻ˣ,
u is not absolutely continuous on the interval [1, ∞), but nevertheless

∫_1^∞ u(x) v′(x) dx = [u(x) v(x)]_1^∞ − ∫_1^∞ u′(x) v(x) dx
so long as [u(x) v(x)]_1^∞ is taken to mean the limit of u(L) v(L) − u(1) v(1) as L → ∞ and so long as the two terms on the right-hand side are finite. This is only true if we choose v(x) = −e⁻ˣ. Similarly, if

u(x) = e⁻ˣ and v′(x) = x⁻¹ sin(x),
v′ is not Lebesgue integrable on the interval [1, ∞), but nevertheless

∫_1^∞ u(x) v′(x) dx = [u(x) v(x)]_1^∞ − ∫_1^∞ u′(x) v(x) dx

with the same interpretation, taking v(x) to be the sine integral Si(x).
One can also easily come up with similar examples in which u and v are not continuously differentiable.
Further, if f is a function of bounded variation on the segment [a, b], and φ is differentiable on [a, b], then

∫_a^b f(x) φ′(x) dx = −∫_{−∞}^{+∞} φ̃(x) d(χ_{[a,b]}(x) f̃(x)),

where d(χ_{[a,b]} f̃) denotes the signed measure corresponding to the function of bounded variation χ_{[a,b]} f̃, and the functions f̃, φ̃ are extensions of f and φ to ℝ which are respectively of bounded variation and differentiable.
Graphical interpretation of the theorem. The pictured curve is parametrized by the variable t.
Consider a parametric curve given by (x, y) = (f(t), g(t)). Assuming that the curve is locally one-to-one and integrable, we can define

x(y) = f(g⁻¹(y)),  y(x) = g(f⁻¹(x)).
The area of the blue region is

A₁ = ∫_{y₁}^{y₂} x(y) dy.
Similarly, the area of the red region is

A₂ = ∫_{x₁}^{x₂} y(x) dx.
The total area A₁ + A₂ is equal to the area of the bigger rectangle, x₂y₂, minus the area of the smaller one, x₁y₁:

∫_{y₁}^{y₂} x(y) dy + ∫_{x₁}^{x₂} y(x) dx = x₂y₂ − x₁y₁.
Or, in terms of t,

∫_{t₁}^{t₂} x(t) y′(t) dt + ∫_{t₁}^{t₂} y(t) x′(t) dt = x(t₂) y(t₂) − x(t₁) y(t₁).
Or, in terms of indefinite integrals, this can be written as

∫ x dy + ∫ y dx = x y.

Rearranging gives ∫ x dy = x y − ∫ y dx.
Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region.
This visualization also explains why integration by parts may help find the integral of an inverse function f⁻¹(x) when the integral of the function f(x) is known. Indeed, the functions x(y) and y(x) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx. In particular, this explains the use of integration by parts to integrate logarithm and inverse trigonometric functions.
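This inverse-function trick can be illustrated numerically (a sketch; the `simpson` helper and the choice f(x) = eˣ are my own illustration): knowing the integral of eˣ over [0, 1] determines the integral of its inverse, ln(y), over [1, e] via ∫ x dy = xy − ∫ y dx.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# y(x) = e^x maps [0, 1] onto [1, e]; its inverse is x(y) = ln(y).
# The identity between corresponding endpoints reads:
#   ∫_1^e ln(y) dy = 1*e - 0*1 - ∫_0^1 e^x dx
lhs = simpson(math.log, 1.0, math.e)
rhs = 1.0 * math.e - 0.0 * 1.0 - simpson(math.exp, 0.0, 1.0)
print(round(lhs, 6), round(rhs, 6))  # 1.0 1.0
```

Both sides come out to 1, the exact value of the integral of ln over [1, e].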
Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions u(x)v(x) such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take:

∫ u v dx = u ∫ v dx − ∫ (u′ ∫ v dx) dx.
On the right-hand side, u is differentiated and v is integrated; consequently it is useful to choose u as a function that simplifies when differentiated, or to choose v as a function that simplifies when integrated. As a simple example, consider:

∫ ln(x)/x² dx.
Since the derivative of ln(x) is 1/x, one makes ln(x) part u; since the antiderivative of 1/x² is −1/x, one makes (1/x²) dx part dv. The formula now yields:

∫ ln(x)/x² dx = −ln(x)/x − ∫ (1/x)(−1/x) dx.
The antiderivative of −1/x² can be found with the power rule and is 1/x, so

∫ ln(x)/x² dx = −ln(x)/x − 1/x + C.
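As an informal check (the finite-difference helper below is my own illustration), one can verify numerically that F(x) = −ln(x)/x − 1/x differentiates back to the integrand ln(x)/x²:

```python
import math

F = lambda x: -math.log(x) / x - 1 / x   # candidate antiderivative
f = lambda x: math.log(x) / x**2         # original integrand

def deriv(g, x, h=1e-5):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

ok = all(abs(deriv(F, x) - f(x)) < 1e-6 for x in (0.5, 1.0, 2.0, 5.0))
print(ok)
```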
Alternatively, one may choose u and v such that the product u′ (∫ v dx) simplifies due to cancellation. For example, suppose one wishes to integrate:

∫ sec²(x) · ln|sin(x)| dx.
If we choose u(x) = ln|sin(x)| and v(x) = sec²(x), then u differentiates to 1/tan(x) using the chain rule and v integrates to tan(x); so the formula gives:

∫ sec²(x) · ln|sin(x)| dx = tan(x) ln|sin(x)| − ∫ tan(x) · (1/tan(x)) dx.
The integrand simplifies to 1, so the antiderivative is x, and

∫ sec²(x) · ln|sin(x)| dx = tan(x) ln|sin(x)| − x + C.

Finding a simplifying combination frequently involves experimentation.
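Again as an informal numerical check (helper names are my own illustration), tan(x)·ln|sin(x)| − x does differentiate back to sec²(x)·ln|sin(x)|:

```python
import math

F = lambda x: math.tan(x) * math.log(abs(math.sin(x))) - x   # candidate antiderivative
f = lambda x: math.log(abs(math.sin(x))) / math.cos(x) ** 2  # sec^2(x) * ln|sin(x)|

def deriv(g, x, h=1e-5):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

ok = all(abs(deriv(F, x) - f(x)) < 1e-6 for x in (0.3, 0.7, 1.0, 1.2))
print(ok)
```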
In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.
Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x is also known.
The first example is ∫ ln(x) dx. We write this as:

∫ ln(x) · 1 dx.

Let u = ln(x) and dv = dx; then du = (1/x) dx and v = x, so

∫ ln(x) dx = x ln(x) − ∫ (x/x) dx = x ln(x) − x + C.
A rule of thumb is to choose as u whichever function comes first in the following list (the mnemonic LIATE):

L – logarithmic functions (e.g., ln(x));
I – inverse trigonometric functions (e.g., arctan(x));
A – algebraic functions (e.g., x²);
T – trigonometric functions (e.g., sin(x));
E – exponential functions (e.g., eˣ).

The function which is to be dv is whichever comes last in the list: functions lower on the list have easier antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where D stands for dv.
To demonstrate the LIATE rule, consider the integral

∫ x cos(x) dx.
Following the LIATE rule, u = x and dv = cos(x) dx, hence du = dx and v = sin(x), which makes the integral become

x sin(x) − ∫ sin(x) dx,

which equals x sin(x) + cos(x) + C.
In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos(x) were chosen as u and x dx as dv, we would have the integral

(x²/2) cos(x) + ∫ (x²/2) sin(x) dx,
which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.
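A numerical spot-check of the LIATE choice (a sketch; the finite-difference helper is my own illustration): x·sin(x) + cos(x) differentiates back to x·cos(x).

```python
import math

F = lambda x: x * math.sin(x) + math.cos(x)  # antiderivative from the LIATE choice
f = lambda x: x * math.cos(x)                # original integrand

def deriv(g, x, h=1e-5):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

ok = all(abs(deriv(F, x) - f(x)) < 1e-6 for x in (0.0, 1.0, 2.0, 3.0))
print(ok)
```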
Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate

∫ x³ e^(x²) dx,

one would set u = x² and dv = x e^(x²) dx, so that du = 2x dx and v = (1/2) e^(x²); then

∫ x³ e^(x²) dx = (x²/2) e^(x²) − ∫ x e^(x²) dx = (x²/2) e^(x²) − (1/2) e^(x²) + C.
Considering a second derivative of v in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS:

∫ u v″ dx = u v′ − ∫ u′ v′ dx = u v′ − (u′ v − ∫ u″ v dx).
Extending this concept of repeated partial integration to derivatives of degree n leads to

∫ u v⁽ⁿ⁾ dx = u v⁽ⁿ⁻¹⁾ − u′ v⁽ⁿ⁻²⁾ + u″ v⁽ⁿ⁻³⁾ − ⋯ + (−1)ⁿ⁻¹ u⁽ⁿ⁻¹⁾ v + (−1)ⁿ ∫ u⁽ⁿ⁾ v dx.
This concept may be useful when the successive integrals of v⁽ⁿ⁾ are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms), and when the nth derivative of u vanishes (e.g., as a polynomial function with degree n − 1). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes.
In the course of the above repetition of partial integrations the integrals

∫ u v⁽ⁿ⁾ dx,  ∫ u′ v⁽ⁿ⁻¹⁾ dx,  …,  ∫ u⁽ⁿ⁾ v dx

get related. This may be interpreted as arbitrarily "shifting" derivatives between u and v within the integrand, and proves useful, too (see Rodrigues' formula).
The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration" and was featured in the film Stand and Deliver.
For example, consider the integral

∫ x³ cos(x) dx.
Begin to list in column A the function u = x³ and its subsequent derivatives until zero is reached. Then list in column B the function cos(x) and its subsequent integrals until the size of column B is the same as that of column A. The result is as follows:

i    sign    A: derivatives u⁽ⁱ⁾    B: integrals v⁽ⁿ⁻ⁱ⁾
0     +      x³                     cos(x)
1     −      3x²                    sin(x)
2     +      6x                     −cos(x)
3     −      6                      −sin(x)
4     +      0                      cos(x)
The product of the entries in row i of columns A and B together with the respective sign gives the relevant integrals in step i in the course of repeated integration by parts. Step i = 0 yields the original integral. For the complete result in step i > 0, the ith integral must be added to all the previous products (0 ≤ j < i) of the jth entry of column A and the (j + 1)st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc.) with the given jth sign. This process comes to a natural halt when the product which yields the integral is zero (i = 4 in the example). The complete result is the following (with the alternating signs in each term):

∫ x³ cos(x) dx = x³ sin(x) + 3x² cos(x) − 6x sin(x) − 6 cos(x) + C.
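The tabular scheme itself is easy to mechanize for an integrand p(x)·cos(x), where p is a polynomial (an illustrative sketch; the helper names `poly_derivs` and `tabular_antiderivative` are my own): column A holds successive derivatives of p, column B successive antiderivatives of cos, and the antiderivative is the alternating-sign sum of products.

```python
import math

def poly_derivs(coeffs):
    """Column A: a polynomial [c0, c1, ...] (c0 + c1*x + ...) and all its
    successive derivatives, down to the zero polynomial."""
    table = [coeffs]
    while any(c != 0 for c in table[-1]):
        cur = table[-1]
        table.append([k * cur[k] for k in range(1, len(cur))] or [0])
    return table

def poly_eval(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Column B: successive antiderivatives of cos(x): sin, -cos, -sin, cos, ...
COS_INTEGRALS = [math.sin,
                 lambda x: -math.cos(x),
                 lambda x: -math.sin(x),
                 math.cos]

def tabular_antiderivative(coeffs, x):
    """Antiderivative of p(x)*cos(x): sum over rows j of
    (-1)^j * p^(j)(x) * ((j+1)-th antiderivative of cos)(x)."""
    return sum((-1) ** j * poly_eval(d, x) * COS_INTEGRALS[j % 4](x)
               for j, d in enumerate(poly_derivs(coeffs)))

# Check against the worked example: the antiderivative of x^3 cos(x)
F = lambda x: tabular_antiderivative([0, 0, 0, 1], x)  # p(x) = x^3
G = lambda x: (x**3 * math.sin(x) + 3 * x**2 * math.cos(x)
               - 6 * x * math.sin(x) - 6 * math.cos(x))
ok = all(abs(F(x) - G(x)) < 1e-9 for x in (0.5, 1.0, 2.0))
print(ok)
```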
The repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating the functions u⁽ⁱ⁾ and v⁽ⁿ⁻ⁱ⁾, their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index i. This can happen, as one might expect, with exponentials and trigonometric functions. As an example, consider

∫ eˣ cos(x) dx.
i    sign    A: derivatives u⁽ⁱ⁾    B: integrals v⁽ⁿ⁻ⁱ⁾
0     +      eˣ                     cos(x)
1     −      eˣ                     sin(x)
2     +      eˣ                     −cos(x)
In this case the product of the terms in columns A and B with the appropriate sign for index i = 2 yields the negative of the original integrand (compare rows i = 0 and i = 2):

∫ eˣ cos(x) dx = eˣ sin(x) + eˣ cos(x) − ∫ eˣ cos(x) dx.
Observing that the integral on the RHS can have its own constant of integration C′, and bringing the abstract integral to the other side, gives

2 ∫ eˣ cos(x) dx = eˣ sin(x) + eˣ cos(x) + C′,

and finally

∫ eˣ cos(x) dx = (eˣ/2) (sin(x) + cos(x)) + C,

where C = C′/2.
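As before, a brief numerical check (the finite-difference helper is my own illustration) confirms that (eˣ/2)(sin(x) + cos(x)) differentiates back to eˣ·cos(x):

```python
import math

F = lambda x: math.exp(x) * (math.sin(x) + math.cos(x)) / 2  # claimed antiderivative
f = lambda x: math.exp(x) * math.cos(x)                      # original integrand

def deriv(g, x, h=1e-5):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

ok = all(abs(deriv(F, x) - f(x)) < 1e-6 for x in (0.0, 1.0, 2.0))
print(ok)
```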
Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function u and vector-valued function (vector field) V.