# Complexity class

In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form:

> the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.

## Background

Complexity classes are concerned with the rate of growth of the resource requirement as the input size n increases. The measurement is abstract: it does not give time or space requirements in terms of seconds or bytes, which would require knowledge of implementation specifics. The function inside the O(...) expression could be a constant, for algorithms that are unaffected by the size of the input; an expression involving a logarithm; an expression involving a power of n, i.e. a polynomial expression; and many others. The O is read as "order of ...". For the purposes of computational complexity theory, some of the details of the function can be ignored; for instance, many possible polynomials can be grouped together as a single class.
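The abstract, implementation-independent nature of this measurement can be illustrated by counting primitive operations rather than measuring seconds. A minimal Python sketch (the functions and the operation counter are illustrative choices made here, not part of the theory):

```python
# Illustrative sketch: counting primitive operations shows how resource
# use grows with the input size n, independently of hardware speed.

def linear_scan(xs):
    """O(n): one comparison per element."""
    steps = 0
    found = False
    for x in xs:
        steps += 1
        if x == 0:
            found = True
    return found, steps

def all_pairs(xs):
    """O(n^2): one comparison per ordered pair of positions."""
    steps = 0
    dup = False
    for i in range(len(xs)):
        for j in range(len(xs)):
            steps += 1
            if i != j and xs[i] == xs[j]:
                dup = True
    return dup, steps

# Doubling n doubles the linear count but quadruples the quadratic one;
# constant factors (seconds per step) never enter the comparison.
_, s1 = linear_scan(list(range(100)))
_, s2 = linear_scan(list(range(200)))
_, q1 = all_pairs(list(range(100)))
_, q2 = all_pairs(list(range(200)))
```

The growth rate, not the absolute count, is what places an algorithm in a class; this is why many different polynomials can be grouped together.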

The resource in question can either be time, essentially the number of primitive operations on an abstract machine, or (storage) space. For example, the class NP is the set of decision problems whose solutions can be determined by a non-deterministic Turing machine in polynomial time, while the class PSPACE is the set of decision problems that can be solved by a deterministic Turing machine in polynomial space.
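NP can equivalently be characterized in terms of polynomial-time verification: a decision problem is in NP if a proposed solution (a certificate) can be checked deterministically in polynomial time. A minimal Python sketch, using subset sum as an illustrative NP problem (the function name and the certificate format are choices made here for illustration):

```python
# SUBSET-SUM is in NP: given a certificate (a candidate subset of
# indices), a deterministic machine can check it in polynomial time.

def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: certificate is a list of indices."""
    if len(set(certificate)) != len(certificate):
        return False  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False  # indices must be in range
    return sum(numbers[i] for i in certificate) == target

# A nondeterministic Turing machine can "guess" the certificate; only
# the verification step must run in polynomial time.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # 4 + 5 = 9 → True
```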

### Characterization

The simplest complexity classes are defined by the type of computational problem, the model of computation, and the resource (or resources) being bounded, together with the bounds. The resource and bound are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.

Many complexity classes can be characterized in terms of the mathematical logic needed to express them; see descriptive complexity.

#### Computational problem

The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems (an example is FP), counting problems (e.g. #P), optimization problems, promise problems, etc.

#### Model of computation

The most common model of computation is the deterministic Turing machine, but many complexity classes are based on nondeterministic Turing machines, boolean circuits, quantum Turing machines, monotone circuits, etc.

#### Resource bounds

Bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
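The quadratic lower bound for {xx | x is any binary string} applies only to single-tape Turing machines; on a multi-tape machine or a random-access model the decision procedure is straightforward and fast. A minimal Python sketch of the decision procedure (the function name is an illustrative choice):

```python
def is_xx(w):
    """Decide the language {xx | x is any binary string}: the input
    must have even length and its two halves must be identical."""
    n = len(w)
    if n % 2 != 0:
        return False
    return w[:n // 2] == w[n // 2:]

print(is_xx("0101"))  # "01" repeated → True
print(is_xx("0110"))  # halves "01" and "10" differ → False
```

On a single tape, comparing the two halves forces the head to shuttle back and forth, which is where the quadratic cost arises.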

The Blum axioms can be used to define complexity classes without referring to a concrete computational model.

## Common complexity classes

ALL is the class of all decision problems. Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:

### Time-complexity classes

| Model of computation | Time constraint: f(n) | Time constraint: poly(n) | Time constraint: 2^poly(n) |
|---|---|---|---|
| Deterministic Turing machine | DTIME(f(n)) | P | EXPTIME |
| Non-deterministic Turing machine | NTIME(f(n)) | NP | NEXPTIME |

### Space-complexity classes

| Model of computation | Space constraint: f(n) | Space constraint: O(log n) | Space constraint: poly(n) | Space constraint: 2^poly(n) |
|---|---|---|---|---|
| Deterministic Turing machine | DSPACE(f(n)) | L | PSPACE | EXPSPACE |
| Non-deterministic Turing machine | NSPACE(f(n)) | NL | NPSPACE | NEXPSPACE |

### Other models of computation

#### Probabilistic model of computation

Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines.
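A classic example of the style of algorithm behind these classes is Freivalds' algorithm for verifying a matrix product, a randomized procedure with one-sided error in the spirit of co-RP (this particular example is an illustration chosen here, not drawn from the text above):

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A*B == C for n x n matrices.
    One-sided error: a correct C is always accepted; an incorrect C
    is rejected with probability at least 1 - 2**(-trials)."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr in O(n^2) time each, avoiding the
        # O(n^3) cost of computing A*B directly.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True
```

Each trial uses random choices (the hallmark of a probabilistic Turing machine), and repeating trials drives the error probability down exponentially.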

#### Boolean circuit models

The classes AC and NC are defined using Boolean circuits.

#### Quantum Turing machines

The classes BQP and QMA, which are of key importance in quantum information science, are defined using quantum Turing machines.

#### Counting problems

#P is an important complexity class of counting problems (not decision problems).

#### Interactive proof models

Classes like IP and AM are defined using interactive proof systems.

### Enumeration algorithms

Several output-sensitive classes have been defined for enumeration algorithms.

## Reduction

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.

The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
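The squaring-to-multiplication reduction described above can be sketched in a few lines of Python (the multiplication routine stands in for any black-box multiplication algorithm):

```python
def multiply(a, b):
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(n):
    """Reduce squaring to multiplication: feed n to both inputs."""
    return multiply(n, n)

print(square(7))  # → 49
```

Any improvement to `multiply` immediately improves `square`, which is exactly the sense in which squaring is no harder than multiplication.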

This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

If a problem X is in C and is hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is unsolved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to any NP-complete problem, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.

## Closure properties of classes

Complexity classes have a variety of closure properties; for example, decision classes may be closed under negation, disjunction, conjunction, or even under all Boolean operations. Moreover, they might also be closed under a variety of quantification schemes. P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. However, it is most likely not closed under quantification over exponential sized domains.

Each class X that is not closed under negation has a complement class co-X, which consists of the complements of the languages contained in X. Similarly one can define the Boolean closure of a class, and so on; this is however less commonly done.

One possible route to separating two complexity classes is to find some closure property possessed by one and not by the other.

## Relationships between complexity classes

### Savitch's theorem

Savitch's theorem establishes that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE. One central question of complexity theory is whether nondeterminism adds significant power to a computational model. This is central to the open P versus NP problem in the context of time. Savitch's theorem shows that for space, nondeterminism does not add significantly more power, where "significant" means the difference between polynomial and superpolynomial resource requirements (or, for EXPSPACE, the difference between exponential and superexponential). For example, Savitch's theorem proves that no problem that requires exponential space for a deterministic Turing machine can be solved by a nondeterministic polynomial space Turing machine.
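Both equalities are instances of the general statement of Savitch's theorem, which, for space-constructible functions f(n) ≥ log n, gives:

${\displaystyle \mathbf {NSPACE} {\big (}f(n){\big )}\subseteq \mathbf {DSPACE} {\big (}f(n)^{2}{\big )}}$ .

Since the square of a polynomial is again a polynomial, PSPACE = NPSPACE follows; since the square of 2^poly(n) is again of the form 2^poly(n), EXPSPACE = NEXPSPACE follows likewise.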

### Other relations

The following summarizes some of the classes of problems (or languages, or grammars) that are considered in complexity theory, ordered by inclusion. Some of these inclusions are known to be strict, while for others it is unknown whether the two classes are equal. Technically, the breakdown into decidable and undecidable pertains more to the study of computability theory, but it is useful for putting the complexity classes in perspective.

- Type 3 (regular) ⊆ Type 2 (context-free) ⊆ P, and NC ⊆ P
- P ⊆ NP, P ⊆ co-NP, P ⊆ BPP, and P ⊆ BQP
- NP, co-NP, BPP, and BQP are all contained in PSPACE, as is Type 1 (context-sensitive)
- PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE
- All of the above classes contain only decidable problems; the decidable problems are a strict subset of Type 0 (recursively enumerable), which also contains undecidable decision problems

### Hierarchy theorems

For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.

More precisely, the time hierarchy theorem states that

${\displaystyle \mathbf {DTIME} {\big (}f(n){\big )}\subsetneq \mathbf {DTIME} {\big (}f(n)\cdot \log ^{2}(f(n)){\big )}}$ .

The space hierarchy theorem states that

${\displaystyle \mathbf {DSPACE} {\big (}f(n){\big )}\subsetneq \mathbf {DSPACE} {\big (}f(n)\cdot \log(f(n)){\big )}}$ .

The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
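As a sketch of how such a separation is derived: every polynomial n^k satisfies n^k · log²(n^k) = o(2^n), and the time hierarchy theorem applied at 2^n gives a strict inclusion inside exponential time, yielding the chain

${\displaystyle \mathbf {P} \subseteq \mathbf {DTIME} {\big (}2^{n}{\big )}\subsetneq \mathbf {DTIME} {\big (}2^{2n}{\big )}\subseteq \mathbf {EXPTIME} }$ ,

so P and EXPTIME cannot be equal.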