# Cell-probe model

In computer science, the cell-probe model is a model of computation similar to the random-access machine, except that only memory accesses are charged; all other operations are free. The model is useful for proving lower bounds on algorithms for data structure problems.

## Overview

The cell-probe model is a minor modification of the random-access machine model, itself a minor modification of the counter machine model, in which computational cost is only assigned to accessing units of memory called cells.

In this model, computation is framed as a problem of querying a set of memory cells. The problem has two phases: a preprocessing phase and a query phase. The input to the first phase, the preprocessing phase, is a set of data from which to build some structure of memory cells. The input to the second phase, the query phase, is a query datum; the problem is to determine whether the query datum was included in the original input data set. All operations are free except accesses to memory cells.

This model is useful in the analysis of data structures. In particular, the model clearly shows a minimum number of memory accesses to solve a problem in which there is stored data on which we would like to run some query. An example of such a problem is the dynamic partial sum problem.[1][2]

## History

In Andrew Yao's 1981 paper "Should Tables Be Sorted?",[3] Yao described the cell-probe model and used it to give a minimum number of memory cell "probes" or accesses necessary to determine whether a given query datum exists within a table stored in memory.

## Formal definition

Given a set of data ${\displaystyle S}$, construct a structure consisting of ${\displaystyle c}$ memory cells, each able to store ${\displaystyle w}$ bits. Then, given a query element ${\displaystyle s}$, determine whether ${\displaystyle s\in S}$ with probability of correctness ${\displaystyle 1-\varepsilon }$ by accessing at most ${\displaystyle t}$ memory cells. An algorithm that does so is called an ${\displaystyle \varepsilon }$-error ${\displaystyle t}$-probe algorithm using ${\displaystyle c}$ cells with word size ${\displaystyle w}$.[4]

## Notable results

### Dynamic Partial Sums

The dynamic partial sum problem defines two operations: Update${\displaystyle (k,v)}$, which sets the value in an array ${\displaystyle A}$ at index ${\displaystyle k}$ to ${\displaystyle v}$, and Sum${\displaystyle (k)}$, which returns the sum of the values in ${\displaystyle A}$ at indices ${\displaystyle 0}$ through ${\displaystyle k}$. A naive implementation that stores ${\displaystyle A}$ directly as an array takes ${\displaystyle O(1)}$ time for Update and ${\displaystyle O(n)}$ time for Sum.[5]
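A minimal sketch of this naive array-based approach (the function names here are illustrative, not from the source):

```python
def update(A, k, v):
    """Set A[k] = v -- a single write, O(1)."""
    A[k] = v

def prefix_sum(A, k):
    """Return A[0] + ... + A[k] by scanning the prefix, O(n)."""
    return sum(A[:k + 1])

A = [0] * 8
update(A, 2, 5)
update(A, 5, 3)
```

Here Update touches one cell, but Sum must read every cell up to index ${\displaystyle k}$, which is what the tree structure below avoids.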

Instead, the values can be stored as leaves in a tree whose inner nodes each store the sum of the values in the subtree rooted at that node. In this structure, Update requires ${\displaystyle O(\log n)}$ time to update each node on the leaf-to-root path, and Sum similarly requires ${\displaystyle O(\log n)}$ time to traverse the tree from leaf to root, summing the values of all subtrees to the left of the query index.

Mihai Pătraşcu used the cell-probe model and an information transfer argument to show that the partial sums problem requires ${\displaystyle \Omega \left(\log n\right)}$  time per operation.[1][2]

### Approximate Nearest Neighbor Searching

The exact nearest neighbor search problem is to determine the point in a set of input points that is closest to a given query point. An approximate version of this problem is often considered, since many applications of this problem involve very high-dimensional spaces, and solving the problem exactly in high dimensions requires time or space exponential in the dimension.[4]

Chakrabarti and Regev proved that the approximate nearest neighbor search problem on the Hamming cube using polynomial storage and ${\displaystyle d^{O(1)}}$ word size requires a worst-case query time of ${\displaystyle \Omega \left({\frac {\log \log d}{\log \log \log d}}\right)}$. The proof used the cell-probe model and information-theoretic techniques from communication complexity.