Dependency relation

In computer science, in particular in concurrency theory, a dependency relation is a binary relation that is finite,[1]:4 symmetric, and reflexive;[1]:6 i.e. a finite tolerance relation. That is, it is a finite set of ordered pairs ${\displaystyle D}$, such that

• If ${\displaystyle (a,b)\in D}$ then ${\displaystyle (b,a)\in D}$ (symmetric)
• If ${\displaystyle a}$ is an element of the set on which the relation is defined, then ${\displaystyle (a,a)\in D}$ (reflexive)

In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity.

If ${\displaystyle \Sigma }$ denotes the alphabet on which ${\displaystyle D}$ is defined, then the independency induced by ${\displaystyle D}$ is the binary relation ${\displaystyle I}$

${\displaystyle I=(\Sigma \times \Sigma )\setminus D}$

That is, the independency is the set of all ordered pairs that are not in ${\displaystyle D}$. The independency relation is symmetric and irreflexive. Conversely, given any symmetric and irreflexive relation ${\displaystyle I}$ on a finite alphabet, the relation

${\displaystyle D=(\Sigma \times \Sigma )\setminus I}$

is a dependency relation.
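The complement construction above can be sketched in a few lines of Python; the alphabet and relation here are illustrative, not part of any standard library:

```python
# Illustrative alphabet and dependency relation (symmetric and reflexive).
sigma = {"a", "b", "c"}
D = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
     ("a", "a"), ("b", "b"), ("c", "c")}

# Check that D is a dependency relation on sigma.
assert all((y, x) in D for (x, y) in D)     # symmetric
assert all((x, x) in D for x in sigma)      # reflexive

# The induced independency is the complement of D in sigma x sigma.
I = {(x, y) for x in sigma for y in sigma} - D
assert all((y, x) in I for (x, y) in I)     # symmetric
assert all((x, x) not in I for x in sigma)  # irreflexive

print(sorted(I))  # the independent pairs: [('b', 'c'), ('c', 'b')]
```

The same checks show why the complement works in general: removing a symmetric, reflexive relation from ${\displaystyle \Sigma \times \Sigma }$ always leaves a symmetric, irreflexive one, and vice versa.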

The pair ${\displaystyle (\Sigma ,D)}$ is called the concurrent alphabet.[2]:6 The pair ${\displaystyle (\Sigma ,I)}$ is called the independency alphabet or reliance alphabet, but this term may also refer to the triple ${\displaystyle (\Sigma ,D,I)}$ (with ${\displaystyle I}$ induced by ${\displaystyle D}$).[3]:6 Elements ${\displaystyle x,y\in \Sigma }$ are called dependent if ${\displaystyle xDy}$ holds, and independent otherwise (i.e. if ${\displaystyle xIy}$ holds).[1]:6

Given a reliance alphabet ${\displaystyle (\Sigma ,D,I)}$, a symmetric and irreflexive relation ${\displaystyle \doteq }$ can be defined on the free monoid ${\displaystyle \Sigma ^{*}}$ of all possible strings of finite length by: ${\displaystyle xaby\doteq xbay}$ for all strings ${\displaystyle x,y\in \Sigma ^{*}}$ and all symbols ${\displaystyle a,b\in \Sigma }$ with ${\displaystyle (a,b)\in I}$. The equivalence closure of ${\displaystyle \doteq }$ is denoted ${\displaystyle \equiv }$ or ${\displaystyle \equiv _{(\Sigma ,D,I)}}$ and called ${\displaystyle (\Sigma ,D,I)}$-equivalence. Informally, ${\displaystyle p\equiv q}$ holds if the string ${\displaystyle p}$ can be transformed into ${\displaystyle q}$ by a finite sequence of swaps of adjacent independent symbols. The equivalence classes of ${\displaystyle \equiv }$ are called traces,[1]:7–8 and are studied in trace theory.

Examples

Given the alphabet ${\displaystyle \Sigma =\{a,b,c\}}$, a possible dependency relation is ${\displaystyle D=\{(a,b),\,(b,a),\,(a,c),\,(c,a),\,(a,a),\,(b,b),\,(c,c)\}}$, see picture.

The corresponding independency is ${\displaystyle I=\{(b,c),\,(c,b)\}}$. Then, for example, the symbols ${\displaystyle b,c}$ are independent of one another, while ${\displaystyle a,b}$ are dependent. The string ${\displaystyle acbba}$ is equivalent to ${\displaystyle abcba}$ and to ${\displaystyle abbca}$, but to no other string.
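The equivalence class in this example can be enumerated mechanically: starting from ${\displaystyle acbba}$, repeatedly swap adjacent independent symbols until no new strings appear. A small Python sketch (the function name is illustrative):

```python
from collections import deque

def trace_class(word, independency):
    """Enumerate the equivalence class (trace) of `word` by breadth-first
    search over single swaps of adjacent independent symbols."""
    seen = {word}
    queue = deque([word])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            # A swap is allowed only where adjacent symbols are independent.
            if (w[i], w[i + 1]) in independency:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

I = {("b", "c"), ("c", "b")}
print(sorted(trace_class("acbba", I)))  # ['abbca', 'abcba', 'acbba']
```

Running this confirms the claim above: the trace of ${\displaystyle acbba}$ contains exactly the three strings ${\displaystyle acbba}$, ${\displaystyle abcba}$, and ${\displaystyle abbca}$.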

References

1. ^ a b c d IJsbrand Jan Aalbersberg and Grzegorz Rozenberg (Mar 1988). "Theory of traces". Theoretical Computer Science. 60 (1): 1–82. doi:10.1016/0304-3975(88)90051-5.
2. ^ Vasconcelos, Vasco Thudichum (1992). Trace semantics for concurrent objects (MSc thesis). Keio University. CiteSeerX 10.1.1.47.7099.
3. ^ Mazurkiewicz, Antoni (1995). "Introduction to Trace Theory" (PDF). In Rozenberg, G.; Diekert, V. (eds.). The Book of Traces. Singapore: World Scientific. ISBN 981-02-2058-8. Retrieved 18 April 2021.