# Point process

In statistics and probability theory, a point process or point field is a collection of mathematical points randomly located on some underlying mathematical space such as the real line, the Cartesian plane, or more abstract spaces. Point processes can be used as mathematical models of phenomena or objects representable as points in some type of space.

There are different mathematical interpretations of a point process, such as a random counting measure or a random set. Some authors regard a point process and a stochastic process as two different objects, such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear. Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or $n$-dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so a point process is also called a random point field.

Point processes are well studied objects in probability theory and the subject of powerful tools in statistics for modeling and analyzing spatial data, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics and others.

Point processes on the real line form an important special case that is particularly amenable to study, because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, location of radio stations in a telecommunication network or of searches on the world-wide web.

## General point process theory

In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points.

### Definition

To define general point processes, we start with a probability space $(\Omega ,{\mathcal {F}},P)$ , and a measurable space $(S,{\mathcal {S}})$  where $S$  is a locally compact second countable Hausdorff space and ${\mathcal {S}}$  is its Borel σ-algebra. Consider now an integer-valued locally finite kernel $\xi$  from $(\Omega ,{\mathcal {F}})$  into $(S,{\mathcal {S}})$ , that is, a mapping $\Omega \times {\mathcal {S}}\mapsto \mathbb {Z} _{+}$  such that:

1. For every $\omega \in \Omega$ , $\xi (\omega ,\cdot )$  is a locally finite measure on $S$ .
2. For every $B\in {\mathcal {S}}$ , $\xi (\cdot ,B):\Omega \mapsto \mathbb {Z} _{+}$  is a random variable over $\mathbb {Z} _{+}$ .

This kernel defines a random measure in the following way. We would like to think of $\xi$  as defining a mapping which maps $\omega \in \Omega$  to a measure $\xi _{\omega }\in {\mathcal {M}}({\mathcal {S}})$  (namely, $\Omega \mapsto {\mathcal {M}}({\mathcal {S}})$ ), where ${\mathcal {M}}({\mathcal {S}})$  is the set of all locally finite measures on $S$ . Now, to make this mapping measurable, we need to define a $\sigma$ -field over ${\mathcal {M}}({\mathcal {S}})$ . This $\sigma$ -field is constructed as the minimal $\sigma$ -algebra so that all evaluation maps of the form $\pi _{B}:\mu \mapsto \mu (B)$  (where $B\in {\mathcal {S}}$  is relatively compact) are measurable. Equipped with this $\sigma$ -field, $\xi$  is a random element, where for every $\omega \in \Omega$ , $\xi _{\omega }$  is a locally finite measure over $S$ .

Now, by a point process on $S$  we simply mean an integer-valued random measure (or equivalently, integer-valued kernel) $\xi$  constructed as above. The most common example for the state space S is the Euclidean space Rn or a subset thereof, where a particularly interesting special case is given by the real half-line [0,∞). However, point processes are not limited to these examples and may among other things also be used if the points are themselves compact subsets of Rn, in which case ξ is usually referred to as a particle process.

It has been noted that the term point process is not a very good one if S is not a subset of the real line, as it might suggest that ξ is a stochastic process. However, the term is well established and uncontested even in the general case.

### Representation

Every instance (or event) of a point process ξ can be represented as

$\xi =\sum _{i=1}^{n}\delta _{X_{i}},$

where $\delta$  denotes the Dirac measure, $n$  is an integer-valued random variable and $X_{i}$  are random elements of $S$ . If the $X_{i}$  are almost surely distinct (or equivalently, almost surely $\xi (\{x\})\leq 1$  for all $x\in S$ ), then the point process is known as simple.

Another different but useful representation of a realization (a point in the sample space, i.e. a series of points in time) is the counting notation, where each instance is represented by its counting function $N(t)$ , a right-continuous step function which takes non-negative integer values, $N:{\mathbb {R} }\rightarrow {\mathbb {Z} ^{+}}$ :

$N(t_{1},t_{2})=\int _{(t_{1},t_{2}]}\xi (\mathrm {d} t)=\xi ((t_{1},t_{2}])$

which is the number of events in the observation interval $(t_{1},t_{2}]$ . It is sometimes shown as $N_{t_{1},t_{2}}$  and $N_{T}$  or $N(T)$  means $N_{0,T}$ .
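As an illustrative sketch (using only the Python standard library; the function name is hypothetical), the counting notation can be evaluated for a point pattern stored as a sorted list of event times:

```python
import bisect

def count_events(times, t1, t2):
    """Number of events in the half-open interval (t1, t2], i.e.
    N(t1, t2) for a point pattern given as a sorted list of times."""
    # bisect_right counts elements <= bound, so the difference counts
    # exactly the events x with t1 < x <= t2.
    return bisect.bisect_right(times, t2) - bisect.bisect_right(times, t1)

events = [0.5, 1.2, 1.2, 3.4, 5.0]     # a (non-simple) point pattern
print(count_events(events, 1.0, 4.0))  # counts 1.2, 1.2, 3.4 -> 3
```

Note that the repeated time 1.2 is counted twice, matching the measure-theoretic view of ξ as a counting measure rather than a set of points.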

### Expectation measure

The expectation measure (also known as mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is,

$E\xi (B):=E{\bigl (}\xi (B){\bigr )}\quad {\text{for every }}B\in {\mathcal {B}}.$

### Laplace functional

The Laplace functional $\Psi _{N}(f)$  of a point process $N$  is a map from the set of all positive-valued functions $f$  on the state space of $N$  to $[0,\infty )$ , defined as follows:

$\Psi _{N}(f)=E[\exp(-N(f))]$

It plays a role similar to that of the characteristic function of a random variable. One important theorem states that two point processes have the same law if and only if their Laplace functionals are equal.

### Moment measure

The $n$th power of a point process, $\xi ^{n},$  is defined on the product space $S^{n}$  as follows:

$\xi ^{n}(A_{1}\times \cdots \times A_{n})=\prod _{i=1}^{n}\xi (A_{i})$

By the monotone class theorem, this uniquely defines the product measure on $(S^{n},B(S^{n})).$  The expectation $E\xi ^{n}(\cdot )$  is called the $n$th moment measure. The first moment measure is the mean measure.

Let $S=\mathbb {R} ^{d}$  . The joint intensities of a point process $\xi$  w.r.t. the Lebesgue measure are functions $\rho ^{(k)}:(\mathbb {R} ^{d})^{k}\to [0,\infty )$  such that for any disjoint bounded Borel subsets $B_{1},\ldots ,B_{k}$

$E\left(\prod _{i}\xi (B_{i})\right)=\int _{B_{1}\times \cdots \times B_{k}}\rho ^{(k)}(x_{1},\ldots ,x_{k})\,dx_{1}\cdots dx_{k}.$

Joint intensities do not always exist for point processes. Since the moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities, and indeed this has been shown in a number of cases.

### Stationarity

A point process $\xi \subset \mathbb {R} ^{d}$  is said to be stationary if $\xi +x:=\sum _{i=1}^{N}\delta _{X_{i}+x}$  has the same distribution as $\xi$  for all $x\in \mathbb {R} ^{d}.$  For a stationary point process, the mean measure is $E\xi (\cdot )=\lambda \|\cdot \|$  for some constant $\lambda \geq 0,$  where $\|\cdot \|$  stands for the Lebesgue measure. This $\lambda$  is called the intensity of the point process. A stationary point process on $\mathbb {R} ^{d}$  has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measures, refer to Chapter 12 of Daley & Vere-Jones. Stationarity has been defined and studied for point processes in more general spaces than $\mathbb {R} ^{d}$ .

## Examples of point processes

We shall see some examples of point processes in $\mathbb {R} ^{d}.$

### Poisson point process

The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process. A Poisson (counting) process on the line can be characterised by two properties: the numbers of points (or events) in disjoint intervals are independent and have Poisson distributions. A Poisson point process can also be defined using these two properties. Namely, we say that a point process $\xi$  is a Poisson point process if the following two conditions hold:

1) $\xi (B_{1}),\ldots ,\xi (B_{n})$  are independent for disjoint subsets $B_{1},\ldots ,B_{n}.$

2) For any bounded subset $B$ , $\xi (B)$  has a Poisson distribution with parameter $\lambda \|B\|,$  where $\|\cdot \|$  denotes the Lebesgue measure.

The two conditions can be combined and written as follows: for any disjoint bounded subsets $B_{1},\ldots ,B_{n}$  and non-negative integers $k_{1},\ldots ,k_{n}$  we have that

$\Pr[\xi (B_{i})=k_{i},1\leq i\leq n]=\prod _{i}e^{-\lambda \|B_{i}\|}{\frac {(\lambda \|B_{i}\|)^{k_{i}}}{k_{i}!}}.$

The constant $\lambda$  is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter $\lambda .$  It is a simple, stationary point process. To be more specific, one calls the above point process a homogeneous Poisson point process. An inhomogeneous Poisson process is defined as above but with $\lambda \|B\|$  replaced by $\int _{B}\lambda (x)\,dx,$  where $\lambda$  is a non-negative function on $\mathbb {R} ^{d}.$
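The two defining conditions translate directly into a simulation recipe on a bounded box: draw a Poisson count with mean $\lambda \|B\|$ , then scatter that many independent uniform points. A minimal sketch, assuming NumPy is available (the function name and parameters are illustrative):

```python
import numpy as np

def homogeneous_poisson(lam, widths, rng):
    """Sample a homogeneous Poisson point process of intensity `lam`
    on the box [0, widths[0]] x ... x [0, widths[d-1]].
    Returns an (n, d) array of points with n ~ Poisson(lam * volume)."""
    widths = np.asarray(widths, dtype=float)
    volume = float(np.prod(widths))    # Lebesgue measure ||B||
    n = rng.poisson(lam * volume)      # xi(B) ~ Poisson(lam * ||B||)
    # Given the count, points are i.i.d. uniform on the box.
    return rng.uniform(0.0, widths, size=(n, widths.size))

rng = np.random.default_rng(0)
pts = homogeneous_poisson(lam=50.0, widths=[1.0, 1.0], rng=rng)
print(pts.shape)  # roughly 50 points in the unit square
```

The same recipe yields an inhomogeneous process if the uniform scatter is replaced by sampling from the density proportional to $\lambda (x)$ , e.g. via rejection sampling.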

### Cox point process

A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use random measures in place of $\lambda \|B\|$ . More formally, let $\Lambda$  be a random measure. A Cox point process driven by the random measure $\Lambda$  is the point process $\xi$  with the following two properties:

1. Given $\Lambda (\cdot )$ , $\xi (B)$  is Poisson distributed with parameter $\Lambda (B)$  for any bounded subset $B.$
2. For any finite collection of disjoint subsets $B_{1},\ldots ,B_{n}$  and conditioned on $\Lambda (B_{1}),\ldots ,\Lambda (B_{n}),$  we have that $\xi (B_{1}),\ldots ,\xi (B_{n})$  are independent.

It is easy to see that Poisson point processes (homogeneous and inhomogeneous) follow as special cases of Cox point processes. The mean measure of a Cox point process is $E\xi (\cdot )=E\Lambda (\cdot )$  and thus in the special case of a Poisson point process, it is $\lambda \|\cdot \|.$

For a Cox point process, $\Lambda (\cdot )$  is called the intensity measure. Further, if $\Lambda (\cdot )$  has a (random) density (Radon–Nikodym derivative) $\lambda (\cdot )$  i.e.,

$\Lambda (B){\stackrel {\text{a.s.}}{=}}\int _{B}\lambda (x)\,dx,$

then $\lambda (\cdot )$  is called the intensity field of the Cox point process. Stationarity of the intensity measure or intensity field implies the stationarity of the corresponding Cox point process.

Many specific classes of Cox point processes have been studied in detail, such as:

• Log Gaussian Cox point processes: $\lambda (y)=\exp(X(y))$  for a Gaussian random field $X(\cdot )$
• Shot noise Cox point processes: $\lambda (y)=\sum _{X\in \Phi }h(X,y)$  for a Poisson point process $\Phi (\cdot )$  and kernel $h(\cdot ,\cdot )$
• Generalised shot noise Cox point processes: $\lambda (y)=\sum _{X\in \Phi }h(X,y)$  for a point process $\Phi (\cdot )$  and kernel $h(\cdot ,\cdot )$
• Lévy based Cox point processes: $\lambda (y)=\int h(x,y)L(dx)$  for a Lévy basis $L(\cdot )$  and kernel $h(\cdot ,\cdot )$
• Permanental Cox point processes: $\lambda (y)=X_{1}^{2}(y)+\cdots +X_{k}^{2}(y)$  for $k$  independent Gaussian random fields $X_{i}(\cdot )$
• Sigmoidal Gaussian Cox point processes: $\lambda (y)=\lambda ^{\star }/(1+\exp(-X(y)))$  for a Gaussian random field $X(\cdot )$  and random $\lambda ^{\star }>0$

By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality: for all bounded Borel subsets $B$ ,

$\operatorname {Var} (\xi (B))\geq \operatorname {Var} (\xi _{\alpha }(B)),$

where $\xi _{\alpha }$  stands for a Poisson point process with intensity measure $\alpha (\cdot ):=E\xi (\cdot )=E\Lambda (\cdot ).$  Thus points are distributed with greater variability in a Cox point process than in a Poisson point process. This is sometimes called the clustering or attractive property of the Cox point process.
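This over-dispersion can be checked numerically with the simplest Cox process, a mixed Poisson process, whose driving measure is $\Lambda (B)=L\|B\|$  for a single random level $L$ . The sketch below assumes NumPy; the Gamma distribution for $L$  and all parameter values are illustrative choices, not part of the definition:

```python
import numpy as np

def mixed_poisson_count(rng, volume=1.0, shape=2.0, scale=1.0):
    """One draw of xi(B) for a mixed Poisson (Cox) process on a set B
    of given volume: Lambda(B) = L * volume with L ~ Gamma(shape, scale),
    and xi(B) | Lambda ~ Poisson(Lambda(B))."""
    level = rng.gamma(shape, scale)       # random intensity level L
    return rng.poisson(level * volume)    # conditionally Poisson count

rng = np.random.default_rng(1)
counts = np.array([mixed_poisson_count(rng) for _ in range(20000)])
# For a Poisson process Var = mean; here randomising the intensity
# inflates the variance, illustrating the inequality above.
print(counts.mean(), counts.var())
```

With $L\sim \mathrm{Gamma}(2,1)$ on a unit-volume set the mean count is 2 while the variance is $E[L]+\operatorname{Var}(L)=4$, so the empirical variance should clearly exceed the empirical mean.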

### Determinantal point processes

An important class of point processes, with applications to physics, random matrix theory, and combinatorics, is that of determinantal point processes.

### Hawkes (self-exciting) processes

A Hawkes process $N_{t}$ , also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as

${\begin{aligned}\lambda (t)&=\mu (t)+\int _{-\infty }^{t}\nu (t-s)\,dN_{s}\\&=\mu (t)+\sum _{T_{k}<t}\nu (t-T_{k}),\end{aligned}}$

where $\nu :\mathbb {R} ^{+}\rightarrow \mathbb {R} ^{+}$  is a kernel function which expresses the positive influence of past events $T_{i}$  on the current value of the intensity process $\lambda (t)$ , $\mu (t)$  is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and $\{T_{i}:T_{i}<T_{i+1}\}\subset \mathbb {R}$  are the times of occurrence of the events of the process.
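A standard way to simulate a Hawkes process with an exponential kernel is Ogata's thinning algorithm: between events the intensity only decays, so its current value is a valid upper bound; candidate events are proposed at that bound and accepted with probability $\lambda (t)/{\bar {\lambda }}$ . A sketch assuming NumPy, with illustrative parameter values (stability requires the kernel to integrate to less than 1, here $\alpha /\beta <1$ ):

```python
import numpy as np

def hawkes_thinning(mu, alpha, beta, horizon, rng):
    """Simulate a Hawkes process with constant baseline `mu` and
    exponential kernel nu(u) = alpha * exp(-beta * u) on [0, horizon]
    by Ogata's thinning algorithm. Returns the list of event times."""
    events, t = [], 0.0
    while t < horizon:
        # Intensity decays between events, so lambda(t) bounds the future.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)          # candidate event time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:         # thinning step
            events.append(t)
    return events

rng = np.random.default_rng(2)
ts = hawkes_thinning(mu=1.0, alpha=0.5, beta=1.5, horizon=50.0, rng=rng)
```

For the exponential kernel the intensity can also be updated recursively in O(1) per event; the quadratic-time sums above are kept for clarity.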

### Geometric processes

Given a sequence of non-negative random variables $\{X_{k},k=1,2,\dots \}$ : if they are independent and the cdf of $X_{k}$  is given by $F(a^{k-1}x)$  for $k=1,2,\dots$ , where $a$  is a positive constant, then $\{X_{k},k=1,2,\ldots \}$  is called a geometric process (GP).

The geometric process has several extensions, including the α-series process and the doubly geometric process.

## Point processes on the real half-line

Historically the first point processes that were studied had the real half line R+ = [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems, in which the points represented events in time, such as calls to a telephone exchange.

Point processes on R+ are typically described by giving the sequence of their (random) inter-event times (T1, T2, ...), from which the actual sequence (X1, X2, ...) of event times can be obtained as

$X_{k}=\sum _{j=1}^{k}T_{j}\quad {\text{for }}k\geq 1.$

If the inter-event times are independent and identically distributed, the point process obtained is called a renewal process.
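Concretely, a realization of a renewal process is just the cumulative sum of i.i.d. inter-event times; with exponential inter-event times this special case is the homogeneous Poisson process on the half-line. A short sketch assuming NumPy (names and the choice of distribution are illustrative):

```python
import numpy as np

def renewal_times(rng, n, mean=2.0):
    """Event times X_k = T_1 + ... + T_k of a renewal process with
    i.i.d. exponential inter-event times of the given mean (this
    particular choice gives a homogeneous Poisson process)."""
    inter = rng.exponential(mean, size=n)   # T_1, ..., T_n
    return np.cumsum(inter)                 # X_1, ..., X_n

rng = np.random.default_rng(3)
xs = renewal_times(rng, n=5)
```

Replacing the exponential draw with any other non-negative distribution (e.g. Gamma or Weibull) gives a general renewal process under the same cumulative-sum construction.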

### Intensity of a point process

The intensity λ(t | Ht) of a point process on the real half-line with respect to a filtration Ht is defined as

$\lambda (t\mid H_{t})=\lim _{\Delta t\to 0}{\frac {1}{\Delta t}}\Pr({\text{One event occurs in the time-interval}}\,[t,t+\Delta t]\mid H_{t}),$

where $H_{t}$  can denote the history of event-point times preceding time $t$  but can also correspond to other filtrations (for example, in the case of a Cox process).

In the $N(t)$ -notation, this can be written in a more compact form: $\lambda (t\mid H_{t})=\lim _{\Delta t\to 0}{\frac {1}{\Delta t}}\Pr(N(t+\Delta t)-N(t)=1\mid H_{t})$ .

The compensator of a point process, also known as the dual-predictable projection, is the integrated conditional intensity function defined by

$\Lambda (s,u)=\int _{s}^{u}\lambda (t\mid H_{t})\,\mathrm {d} t.$

## Related functions

### Papangelou intensity function

The Papangelou intensity function of a point process $N$  in the $n$ -dimensional Euclidean space $\mathbb {R} ^{n}$  is defined as

$\lambda _{p}(x)=\lim _{\delta \to 0}{\frac {1}{|B_{\delta }(x)|}}{P}\{{\text{One event occurs in }}\,B_{\delta }(x)\mid \sigma [N(\mathbb {R} ^{n}\setminus B_{\delta }(x))]\},$

where $B_{\delta }(x)$  is the ball centered at $x$  of a radius $\delta$ , and $\sigma [N(\mathbb {R} ^{n}\setminus B_{\delta }(x))]$  denotes the information of the point process $N$  outside $B_{\delta }(x)$ .

### Likelihood function

The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as

$\ln {\mathcal {L}}(N(t)_{t\in [0,T]})=\int _{0}^{T}(1-\lambda (s))ds+\int _{0}^{T}\ln \lambda (s)dN_{s}$ 
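For a homogeneous Poisson process with constant rate $\lambda$  on $[0,T]$ , the formula reduces to $T(1-\lambda )+N(T)\ln \lambda$ , which is maximised at the MLE $\lambda =N(T)/T$ . A minimal sketch (function name and data are illustrative):

```python
import math

def poisson_loglik(event_times, lam, horizon):
    """Log-likelihood of a homogeneous Poisson process with rate `lam`
    on [0, horizon], per the formula above: the integral of (1 - lam)
    over time plus log(lam) summed over the observed events."""
    n = len(event_times)
    return horizon * (1.0 - lam) + n * math.log(lam)

# The MLE of lam for 3 events on [0, 5] is 3/5 = 0.6; the likelihood
# evaluated there exceeds its value at any other rate.
ll_mle = poisson_loglik([0.4, 1.1, 2.7], lam=0.6, horizon=5.0)
ll_off = poisson_loglik([0.4, 1.1, 2.7], lam=1.0, horizon=5.0)
print(ll_mle > ll_off)  # True
```

For non-constant intensities the two integrals are evaluated numerically, with the second reducing to a sum of $\ln \lambda (T_{i})$  over the observed event times.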

## Point processes in spatial statistics

The analysis of point pattern data in a compact subset S of Rn is a major object of study within spatial statistics. Such data appear in a broad range of disciplines, amongst which are

• forestry and plant ecology (positions of trees or plants in general)
• epidemiology (home locations of infected patients)
• zoology (burrows or nests of animals)
• geography (positions of human settlements, towns or cities)
• seismology (epicenters of earthquakes)
• materials science (positions of defects in industrial materials)
• astronomy (locations of stars or galaxies)
• computational neuroscience (spikes of neurons).

The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibit complete spatial randomness (i.e. are a realization of a spatial Poisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition.

In contrast, many datasets considered in classical multivariate statistics consist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial).

Apart from the applications in spatial statistics, point processes are one of the fundamental objects in stochastic geometry. Research has also focussed extensively on various models built on point processes, such as Voronoi tessellations, random geometric graphs, and the Boolean model.