Longest increasing subsequence

In computer science, the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. Longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics.[1][2] The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence.[3]

Example

In the first 16 terms of the binary Van der Corput sequence

0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15

one of the longest increasing subsequences is

0, 2, 6, 9, 11, 15.

This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not the only solution: for instance,

0, 4, 6, 9, 11, 15
0, 2, 6, 9, 13, 15
0, 4, 6, 9, 13, 15

are other increasing subsequences of equal length in the same input sequence.
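
These claims can be checked mechanically. The following short Python sketch (illustrative only; the function name is ours) computes the length of a longest increasing subsequence of the 16-term prefix by quadratic dynamic programming and confirms that the length is six:

def lis_length_quadratic(seq):
    # best[i] = length of the longest increasing subsequence ending at seq[i]
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

van_der_corput = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
print(lis_length_quadratic(van_der_corput))  # prints 6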

Relations to other algorithmic problems

The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n).[4]
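
As a sketch of this reduction (an illustration, not code from the cited source), the quadratic LCS-based approach can be written as follows; for sequences with distinct elements, a longest common subsequence of the sequence and its sorted copy is a longest increasing subsequence:

def lcs_length(a, b):
    # Standard quadratic dynamic program for the longest common subsequence.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            if x == y:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]

def lis_length_via_lcs(seq):
    # Reduce LIS to LCS by comparing the sequence with its sorted copy.
    return lcs_length(seq, sorted(seq))

print(lis_length_via_lcs([0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))  # prints 6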

The largest clique in a permutation graph corresponds to the longest decreasing subsequence of the permutation that defines the graph (assuming the original non-permuted sequence is sorted from lowest value to highest). Similarly, the maximum independent set in a permutation graph corresponds to the longest non-decreasing subsequence. Therefore, longest increasing subsequence algorithms can be used to solve the clique problem efficiently in permutation graphs.[5]
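
To illustrate this correspondence on a small instance (a brute-force sketch under the common convention that the permutation graph joins two positions exactly when the permutation inverts them; the function names are ours), one can compare the maximum clique size of the inversion graph with the length of a longest decreasing subsequence:

from itertools import combinations

def longest_decreasing_length(seq):
    # best[i] = length of the longest decreasing subsequence ending at seq[i]
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] > seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

def max_clique_size_inversion_graph(perm):
    n = len(perm)
    # Positions i < j are adjacent whenever the permutation inverts them.
    edges = {(i, j) for i, j in combinations(range(n), 2) if perm[i] > perm[j]}
    for size in range(n, 0, -1):  # brute force, fine for tiny examples
        for subset in combinations(range(n), size):
            if all(pair in edges for pair in combinations(subset, 2)):
                return size
    return 0

perm = [3, 5, 1, 4, 2]
print(max_clique_size_inversion_graph(perm))  # 3
print(longest_decreasing_length(perm))        # 3 (e.g. 5, 4, 2)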

In the Robinson–Schensted correspondence between permutations and Young tableaux, the length of the first row of the tableau corresponding to a permutation equals the length of the longest increasing subsequence of the permutation, and the length of the first column equals the length of the longest decreasing subsequence.[3]
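
The correspondence can be made concrete with a small sketch of Schensted row insertion (a standard construction; the code is illustrative rather than taken from the cited paper): inserting the values of a permutation one by one builds a tableau whose first row has the length of a longest increasing subsequence and whose number of rows is the length of a longest decreasing subsequence.

import bisect

def schensted_tableau(perm):
    # Row-insert each value; a bumped entry cascades into the next row.
    rows = []
    for value in perm:
        for row in rows:
            pos = bisect.bisect_left(row, value)  # leftmost entry greater than value
            if pos == len(row):
                row.append(value)
                break
            value, row[pos] = row[pos], value     # bump and continue downward
        else:
            rows.append([value])
    return rows

tableau = schensted_tableau([0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15])
print(len(tableau[0]))  # 6, the longest increasing subsequence length
print(len(tableau))     # the longest decreasing subsequence length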

Efficient algorithms

The algorithm outlined below solves the longest increasing subsequence problem efficiently with arrays and binary searching. It processes the sequence elements in order, maintaining the longest increasing subsequence found so far. Denote the sequence values as X[0], X[1], etc. Then, after processing X[i], the algorithm will have stored an integer L and values in two arrays:

  • L — stores the length of the longest increasing subsequence found so far.
  • M[l] — stores the index k of the smallest value X[k] such that there is an increasing subsequence of length l ending at X[k] in the range k ≤ i. Explicitly, suppose that K denotes the set of all indices k such that k ≤ i and there exists an increasing subsequence of length l ending at X[k]. Then M[l] is the index in K for which X[k] is minimized; meaning that M[l] is in K and X[M[l]] = min { X[k] : k in K } (or equivalently, X[M[l]] ≤ X[k] for every k in K); if multiple indices satisfy this condition then M[l] is the largest one.
    • To clarify, "there exists an increasing subsequence of length l ending at X[k]" means that there exist l indices j_1 < j_2 < ... < j_l = k such that X[j_1] < X[j_2] < ... < X[j_l].
    • Note that l ≤ i + 1, because l ≥ 1 represents the length of the increasing subsequence and k ≥ 0 represents the index of its termination.
    • The length of M is 1 more than the length of X, but it is possible that not all elements in this array are used by the algorithm (in fact, if the longest increasing subsequence has length L then only M[1], ..., M[L] are used). If M[l] is used/defined then l ≤ M[l] + 1 (and moreover, at iteration i, M[l] ≤ i will also hold). M[0] is undefined, since sequences of length 0 have no ending index (it can be any value).
  • P[k] — stores the index of the predecessor of X[k] in the longest increasing subsequence ending at X[k].
    • The length of P is equal to that of X.
    • If the longest increasing subsequence ending at X[k] has length greater than 1, then P[k] < k; otherwise X[k] has no predecessor and P[k] is set to a sentinel value (−1 in the pseudocode below).

Because the algorithm below uses zero-based numbering, for clarity M is padded with M[0], which goes unused so that M[l] corresponds to a subsequence of length l. A real implementation can skip M[0] and adjust the indices accordingly.

Note that, at any point in the algorithm, the sequence

X[M[1]], X[M[2]], ..., X[M[L]]

is increasing. For, if there is an increasing subsequence of length l ≥ 2 ending at X[M[l]], then there is also a subsequence of length l − 1 ending at a smaller value: namely the one ending at X[P[M[l]]]. Thus, we may do binary searches in this sequence in logarithmic time.

The algorithm, then, proceeds as follows:

 
[Figure: a demo of the code.]
P = array of length N
M = array of length N + 1
M[0] = -1 // undefined so can be set to any value

L = 0
for i in range 0 to N-1: //N-1 included
    // Binary search for the smallest positive l ≤ L
    // such that X[M[l]] >= X[i]
    lo = 1
    hi = L + 1
    while lo < hi:
        mid = lo + floor((hi-lo)/2) // lo <= mid < hi
        if X[M[mid]] >= X[i]:
            hi = mid
        else: // if X[M[mid]] < X[i]
            lo = mid + 1

    // After searching, lo == hi is 1 greater than the
    // length of the longest increasing subsequence
    // that ends in a value smaller than X[i]
    newL = lo

    // The predecessor of X[i] is the last index of 
    // the subsequence of length newL-1
    P[i] = M[newL-1]
    M[newL] = i
    
    if newL > L:
        // If we found a subsequence longer than any we've
        // found yet, update L
        L = newL

// Reconstruct the longest increasing subsequence
// It consists of the values of X at the L indices:
// ...,  P[P[M[L]]], P[M[L]], M[L]
S = array of length L
k = M[L]
for j in range L-1 to 0: //0 included
    S[j] = X[k]
    k = P[k]

return S

Because the algorithm performs a single binary search per sequence element, its total time can be expressed using Big O notation as O(n log n). Fredman (1975) discusses a variant of this algorithm, which he credits to Donald Knuth; in the variant that he studies, the algorithm tests whether each value X[i] can be used to extend the current longest increasing sequence, in constant time, prior to doing the binary search. With this modification, the algorithm uses at most n log₂ n − n log₂ log₂ n + O(n) comparisons in the worst case, which is optimal for a comparison-based algorithm up to the constant factor in the O(n) term.[6]

Example run

Using the input X = [2, 8, 9, 5, 6, 7, 1], the values stored in the variables at the end of each iteration of the i loop are as follows ("before" is the state before the loop starts):

i      | X[i] | newL | P                       | M                | X[M]              | L
before |  —   |  —   | []                      | [-1]             | [N/A]             | 0
0      |  2   |  1   | [-1]                    | [-1, 0]          | [N/A, 2]          | 1
1      |  8   |  2   | [-1, 0]                 | [-1, 0, 1]       | [N/A, 2, 8]       | 2
2      |  9   |  3   | [-1, 0, 1]              | [-1, 0, 1, 2]    | [N/A, 2, 8, 9]    | 3
3      |  5   |  2   | [-1, 0, 1, 0]           | [-1, 0, 3, 2]    | [N/A, 2, 5, 9]    | 3
4      |  6   |  3   | [-1, 0, 1, 0, 3]        | [-1, 0, 3, 4]    | [N/A, 2, 5, 6]    | 3
5      |  7   |  4   | [-1, 0, 1, 0, 3, 4]     | [-1, 0, 3, 4, 5] | [N/A, 2, 5, 6, 7] | 4
6      |  1   |  1   | [-1, 0, 1, 0, 3, 4, -1] | [-1, 6, 3, 4, 5] | [N/A, 1, 5, 6, 7] | 4

The values stored in the variables during the reconstruction loop are:

j      | S                    | k          | X[k]
before | [N/A, N/A, N/A, N/A] | M[4] = 5   | 7
3      | [N/A, N/A, N/A, 7]   | P[5] = 4   | 6
2      | [N/A, N/A, 6, 7]     | P[4] = 3   | 5
1      | [N/A, 5, 6, 7]       | P[3] = 0   | 2
0      | [2, 5, 6, 7]         | P[0] = -1  | N/A
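
The run above can be reproduced with a rough Python rendering of the pseudocode (a sketch that keeps the pseudocode's variable names; it is not the only reasonable implementation):

def longest_increasing_subsequence(X):
    N = len(X)
    P = [-1] * N        # P[k]: predecessor index of X[k], -1 when there is none
    M = [-1] * (N + 1)  # M[l]: index of the smallest tail of an increasing
                        #       subsequence of length l found so far
    L = 0
    for i in range(N):
        # Binary search for the smallest l in 1..L with X[M[l]] >= X[i].
        lo, hi = 1, L + 1
        while lo < hi:
            mid = lo + (hi - lo) // 2
            if X[M[mid]] >= X[i]:
                hi = mid
            else:
                lo = mid + 1
        newL = lo
        P[i] = M[newL - 1]
        M[newL] = i
        L = max(L, newL)
    # Reconstruct the subsequence by following predecessor links from M[L].
    S = [None] * L
    k = M[L]
    for j in range(L - 1, -1, -1):
        S[j] = X[k]
        k = P[k]
    return S

print(longest_increasing_subsequence([2, 8, 9, 5, 6, 7, 1]))  # [2, 5, 6, 7]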

Length bounds

According to the Erdős–Szekeres theorem, any sequence of n² + 1 distinct integers has an increasing or a decreasing subsequence of length n + 1.[7][8] For inputs of length n in which each permutation of the input is equally likely, the expected length of the longest increasing subsequence is approximately 2√n.[9][2]
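
The 2√n approximation can be checked empirically. The sketch below (the sample size and trial count are arbitrary choices) estimates the expected length over random permutations using the tails array of the binary-search algorithm described above:

import random
from bisect import bisect_left
from math import sqrt

def lis_length(seq):
    # tails[l] is the smallest possible tail value of an increasing
    # subsequence of length l + 1 seen so far.
    tails = []
    for x in seq:
        pos = bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)
        else:
            tails[pos] = x
    return len(tails)

n, trials = 1000, 200
average = sum(lis_length(random.sample(range(n), n)) for _ in range(trials)) / trials
print(average, 2 * sqrt(n))  # the empirical mean approaches 2*sqrt(n) as n grows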

In the limit as n approaches infinity, the Baik–Deift–Johansson theorem says that the length of the longest increasing subsequence of a randomly permuted sequence of n items has a distribution approaching the Tracy–Widom distribution, the distribution of the largest eigenvalue of a random matrix in the Gaussian unitary ensemble.[10]

Online algorithms

The longest increasing subsequence has also been studied in the setting of online algorithms, in which the elements of a sequence of independent random variables with continuous distribution F – or alternatively the elements of a random permutation – are presented one at a time to an algorithm that must decide whether to include or exclude each element, without knowledge of the later elements. In this variant of the problem, which allows for interesting applications in several contexts, it is possible to devise an optimal selection procedure that, given a random sample of size n as input, will generate an increasing sequence with maximal expected length of size approximately √(2n).[11] The length of the increasing subsequence selected by this optimal procedure has variance approximately equal to √(2n)/3, and its limiting distribution is asymptotically normal after the usual centering and scaling.[12] The same asymptotic results hold with more precise bounds for the corresponding problem in the setting of a Poisson arrival process.[13] A further refinement in the Poisson process setting is given through the proof of a central limit theorem for the optimal selection process, which holds, with a suitable normalization, in a more complete sense than one would expect. The proof yields not only the "correct" functional limit theorem but also the (singular) covariance matrix of the three-dimensional process summarizing all interacting processes.[14]
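
The flavor of the online constraint can be seen in a toy simulation (purely illustrative; the window rule below is a simple heuristic of our own, not the optimal procedure analysed in the cited papers): each uniform value is accepted only if it exceeds the last accepted value by no more than a window that shrinks as fewer items remain.

import random
from math import sqrt

def online_select(values):
    # Heuristic: accept x if it is above the last accepted value but not by
    # more than (remaining gap to 1) / sqrt(items remaining). Illustrative only.
    last, count = 0.0, 0
    n = len(values)
    for i, x in enumerate(values):
        remaining = n - i
        window = (1.0 - last) / sqrt(remaining) if remaining > 1 else 1.0
        if last < x <= last + window:
            last, count = x, count + 1
    return count

n, trials = 10000, 100
average = sum(online_select([random.random() for _ in range(n)]) for _ in range(trials)) / trials
print(average, sqrt(2 * n))  # compare with the ~sqrt(2n) benchmark for the optimal rule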

See also

  • Anatoly Vershik – Russian mathematician (1933–2024) who studied applications of group theory to longest increasing subsequences in random permutations.[15]
  • Longest alternating subsequence
  • Longest common subsequence – Algorithmic problem on pairs of sequences
  • Patience sorting – Sorting algorithm – an efficient technique for finding the length of the longest increasing subsequence
  • Plactic monoid – monoid of positive integers modulo Knuth equivalence – an algebraic system defined by transformations that preserve the length of the longest increasing subsequence

References

  1. ^ Aldous, David; Diaconis, Persi (1999), "Longest increasing subsequences: from patience sorting to the Baik–Deift–Johansson theorem", Bulletin of the American Mathematical Society, 36 (4): 413–432, doi:10.1090/S0273-0979-99-00796-X.
  2. ^ a b Romik, Dan (2015). The Surprising Mathematics of Longest Increasing Subsequences. doi:10.1017/CBO9781139872003. ISBN 9781107075832.
  3. ^ a b Schensted, C. (1961), "Longest increasing and decreasing subsequences", Canadian Journal of Mathematics, 13: 179–191, doi:10.4153/CJM-1961-015-3, MR 0121305.
  4. ^ Hunt, J.; Szymanski, T. (1977), "A fast algorithm for computing longest common subsequences", Communications of the ACM, 20 (5): 350–353, doi:10.1145/359581.359603, S2CID 3226080.
  5. ^ Golumbic, M. C. (1980), Algorithmic Graph Theory and Perfect Graphs, Computer Science and Applied Mathematics, Academic Press, p. 159.
  6. ^ Fredman, Michael L. (1975), "On computing the length of longest increasing subsequences", Discrete Mathematics, 11 (1): 29–35, doi:10.1016/0012-365X(75)90103-X.
  7. ^ Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry", Compositio Mathematica, 2: 463–470.
  8. ^ Steele, J. Michael (1995), "Variations on the monotone subsequence theme of Erdős and Szekeres", in Aldous, David; Diaconis, Persi; Spencer, Joel; et al. (eds.), Discrete Probability and Algorithms (PDF), IMA Volumes in Mathematics and its Applications, vol. 72, Springer-Verlag, pp. 111–131.
  9. ^ Vershik, A. M.; Kerov, C. V. (1977), "Asymptotics of the Plancherel measure of the symmetric group and a limiting form for Young tableaux", Dokl. Akad. Nauk SSSR, 233: 1024–1027.
  10. ^ Baik, Jinho; Deift, Percy; Johansson, Kurt (1999), "On the distribution of the length of the longest increasing subsequence of random permutations", Journal of the American Mathematical Society, 12 (4): 1119–1178, arXiv:math/9810105, doi:10.1090/S0894-0347-99-00307-0.
  11. ^ Samuels, Stephen. M.; Steele, J. Michael (1981), "Optimal Sequential Selection of a Monotone Sequence From a Random Sample" (PDF), Annals of Probability, 9 (6): 937–947, doi:10.1214/aop/1176994265, archived (PDF) from the original on July 30, 2018
  12. ^ Arlotto, Alessandro; Nguyen, Vinh V.; Steele, J. Michael (2015), "Optimal online selection of a monotone subsequence: a central limit theorem", Stochastic Processes and Their Applications, 125 (9): 3596–3622, arXiv:1408.6750, doi:10.1016/j.spa.2015.03.009, S2CID 15900488
  13. ^ Bruss, F. Thomas; Delbaen, Freddy (2001), "Optimal rules for the sequential selection of monotone subsequences of maximum expected length", Stochastic Processes and Their Applications, 96 (2): 313–342, doi:10.1016/S0304-4149(01)00122-3.
  14. ^ Bruss, F. Thomas; Delbaen, Freddy (2004), "A central limit theorem for the optimal selection process for monotone subsequences of maximum expected length", Stochastic Processes and Their Applications, 114 (2): 287–311, doi:10.1016/j.spa.2004.09.002.
  15. ^ Romik, Dan (2015). The surprising mathematics of longest increasing subsequences. Institute of Mathematical Statistics Textbooks. New York: Cambridge University Press. ISBN 978-1-107-42882-9.

External links