Perceptual control theory
Perceptual control theory (PCT) is a model of behavior based on the principles of negative feedback, but differing in important respects from engineering control theory. Results of PCT experiments have demonstrated that an organism controls neither its own behavior, nor external environmental variables, but rather its own perceptions of those variables. Actions are not controlled; they are varied so as to cancel the effects that unpredictable environmental disturbances would otherwise have on controlled perceptions. As a catch-phrase of the field puts it, "behavior is the control of perception." PCT demonstrates circular causation in a negative feedback loop closed through the environment. This fundamentally contradicts the classical notion of linear causation of behavior by stimuli, in which environmental stimuli are thought to cause behavioral responses, mediated (according to cognitive psychology) by intervening cognitive processes.
Numerous computer simulations of specific behavioral situations demonstrate its efficacy, with extremely high correlations to observational data (0.95 or better), which are vanishingly rare in the so-called 'soft' sciences. While the adoption of PCT in the scientific community has not been widespread, it has been applied not only in experimental psychology and neuroscience, but also in sociology, linguistics, and a number of other fields, and has led to a method of psychotherapy called the method of levels.
The place of purpose (intention) and causation in psychology
A tradition from Aristotle through William James recognizes that behavior is purposeful rather than merely reactive. However, the only evidence for intentions was subjective. Behaviorists following Wundt, Thorndike, Watson, and others rejected introspective reports as data for an objective science of psychology. Only observable behavior could be admitted as data.
There follows from this stance the assumption that environmental events (stimuli) cause behavioral actions (responses). This assumption persists in cognitive psychology, which interposes cognitive maps and other postulated information processing between stimulus and response, but otherwise retains the assumption of linear causation from environment to behavior.
Another, more specific reason for psychologists' rejecting notions of purpose or intention was that they could not see how a goal (a state that did not yet exist) could cause the behavior that led to it. PCT resolves these philosophical arguments about teleology because it provides a model of the functioning of organisms in which purpose has objective status without recourse to introspection, and in which causation is circular around feedback loops.
PCT has roots in insights of Claude Bernard and 20th century control systems engineering and cybernetics. It was originated as such, and given its present form and experimental methodology, by William T. Powers.
Powers recognized that to be purposeful implies control, and that the concepts and methods of engineered control systems could be applied to biological control systems. A key insight is that the variable that is controlled is not the output of the system (the behavioral actions), but its input, that is, a sensed and transformed function of some state of the environment that could be affected by the control system's output. Because some of these sensed and transformed inputs appear as consciously perceived aspects of the environment, Powers labelled the controlled variable "perception". The theory came to be known as "Perceptual Control Theory" or PCT rather than "Control Theory Applied to Psychology" because control theorists often assert or assume that it is the system's output that is controlled. In PCT it is the internal representation of the state of some variable in the environment—a "perception" in everyday language—that is controlled. The basic principles of PCT were first published by Powers, Clark, and MacFarland as a "general feedback theory of behavior" in 1960, with credits to cybernetic authors Wiener and Ashby, and the theory has been systematically developed since then in the research community that has gathered around it. Initially, it received little general recognition, but is now better known.
A simple example of a negative feedback control system is the cruise control system of a car. A cruise control system has a sensor which "perceives" speed as the rate of spin of the drive shaft directly connected to the wheels. It also has a driver-adjustable 'goal' specifying a particular speed. The sensed speed is continuously compared against the specified speed by a device (called a "comparator") which subtracts the currently sensed input value from the stored goal value. The difference (the error signal) determines the throttle setting (the accelerator depression), so that the engine output is continuously varied to prevent the speed of the car from increasing or decreasing from that desired speed as environmental conditions change. This type of classical negative feedback control was worked out by engineers in the 1930s and 1940s.
If the speed of the car starts to drop below the goal-speed, for example when climbing a hill, the small increase in the error signal, amplified, causes engine output to increase, which keeps the error very nearly at zero. If the speed begins to exceed the goal, e.g. when going down a hill, the engine is throttled back so as to act as a brake, so again the speed is kept from departing more than a barely detectable amount from the goal speed (brakes being needed only if the hill is too steep). The result is that the cruise control system maintains a speed close to the goal as the car goes up and down hills, and as other disturbances such as wind affect the car's speed. This is all done without any planning of specific actions, and without any blind reactions to stimuli. Indeed, the cruise control system does not sense disturbances such as wind pressure at all, it only senses the controlled variable, speed. Nor does it control the power generated by the engine, it uses the 'behavior' of engine power as its means to control the sensed speed.
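The cruise control loop described above can be sketched in a few lines of simulation code. The gain, time step, and disturbance values below are illustrative assumptions chosen for the sketch, not figures from any engineering source:

```python
def simulate_cruise(goal=30.0, gain=10.0, dt=0.1, steps=600):
    """Negative feedback loop: output varies to keep sensed speed near goal."""
    speed = 0.0
    for step in range(steps):
        # A hill appears halfway through: a constant drag the system never senses.
        disturbance = -5.0 if step > steps // 2 else 0.0
        error = goal - speed              # comparator: goal minus sensed speed
        throttle = gain * error           # output is driven by the error alone
        # The environment closes the loop: speed integrates throttle plus drag.
        speed += (throttle + disturbance) * dt
    return speed
```

Despite the constant "hill", the final speed stays within a fraction of a unit of the goal, and the residual error shrinks as the gain rises, which is the sense in which control keeps the error "very nearly at zero".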
The same principles of negative feedback control (including the ability to nullify effects of unpredictable external or internal disturbances) apply to living control systems. The thesis of PCT is that animals and people do not control their behavior; rather, they vary their behavior as their means for controlling their perceptions, with or without external disturbances. This directly contradicts the historical and still widespread assumption that behavior is the final result of stimulus inputs and cognitive plans.
The methodology of modeling, and PCT as model
The principal datum in PCT methodology is the controlled variable. The fundamental step of PCT research, the test for controlled variables, begins with the slow and gentle application of disturbing influences to the state of a variable in the environment which the researcher surmises is already under control by the observed organism. It is essential not to overwhelm the organism's ability to control, since that is what is being investigated. If the organism changes its actions just so as to prevent the disturbing influence from having the expected effect on that variable, that is strong evidence that the experimental action disturbed a controlled variable. It is crucially important to distinguish the perceptions and point of view of the observer from those of the observed organism. It may take a number of variations of the test to isolate just which aspect of the environmental situation is under control, as perceived by the observed organism.
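The logic of the test can be sketched numerically. Assuming a simple additive situation in which the disturbance would, absent control, show up fully in the variable, a near-zero ratio of observed variation to disturbance variation is evidence of control (the threshold here is an arbitrary illustration):

```python
def looks_controlled(qi_samples, d_samples, threshold=0.1):
    """Crude sketch of the test for controlled variables on recorded samples."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # If the variable barely varies while the disturbance varies a lot,
    # something is opposing the disturbance: a candidate controlled variable.
    return var(qi_samples) / var(d_samples) < threshold
```

A variable that tracks the disturbance fails this check; one that holds nearly constant despite it passes, prompting further variations of the test to isolate exactly what is controlled.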
PCT employs a black box methodology. The controlled variable as measured by the observer corresponds quantitatively to a reference value for a perception that the organism is controlling. The controlled variable is thus an objective index of the purpose or intention of those particular behavioral actions by the organism—the goal which those actions consistently work to attain despite disturbances. In the current state of neuroscience, this internally maintained reference value is seldom directly observed as such (e.g. as a rate of firing in a neuron), since few researchers trace the relevant electrical and chemical variables by their specific pathways while a living organism is engaging in what we externally observe as behavior. However, when a working negative feedback system simulated on a digital computer performs essentially identically to observed organisms, then the well understood negative feedback structure of the simulation or model (the white box) is understood to demonstrate the unseen negative feedback structure within the organism (the black box).
Data for individuals are not aggregated for statistical analysis; instead, a generative model is built which replicates the data observed for individuals with very high fidelity (0.95 or better). To build such a model of a given behavioral situation requires careful measurements of three observed variables:
qi: The input quantity, that aspect of the stimulus which the subject perceives and has been demonstrated to be controlling.
qo: The output quantity, that aspect of the subject's behavior which affects the state of qi.
d: The disturbance, a value summing the effects that any other influences in the environment have on the state of qi. In a controlled experiment one aims to have just one disturbing influence that is under the control of the investigator, but in naturalistic observation the situation is frequently more complex.
A fourth value, the internally maintained reference r (a variable 'setpoint'), is deduced from the value at which the organism is observed to maintain qi, as determined by the test for controlled variables (described at the beginning of this section).
With two variables specified, the controlled input qi and the reference r, a properly designed control system, simulated on a digital computer, produces outputs qo that almost precisely oppose unpredictable disturbances d to the controlled input. Further, the variance from perfect control accords well with that observed for living organisms. Perfect control would result in zero effect of the disturbance, but living organisms are not perfect controllers, and the aim of PCT is to model living organisms. When a computer simulation performs with >95% conformity to experimentally measured values, opposing the effect of unpredictable changes in d by generating (nearly) equal and opposite values of qo, it is understood to model the behavior and the internal control-loop structure of the organism.
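This claim can be illustrated with a minimal simulation. The environment link (qi = qo + d), the gain, and the disturbance statistics below are assumptions chosen for the sketch:

```python
import random

def run_loop(r=10.0, gain=50.0, dt=0.01, steps=2000, seed=1):
    """Integrating control loop: qo comes to mirror -d, keeping qi near r."""
    rng = random.Random(seed)
    qo, d = 0.0, 0.0
    qi_trace = []
    for _ in range(steps):
        d += rng.uniform(-0.5, 0.5)   # unpredictable, drifting disturbance
        qi = qo + d                   # environment: input = output + disturbance
        e = r - qi                    # comparator (perceptual function is identity)
        qo += gain * e * dt           # integrating output function
        qi_trace.append(qi)
    return qi_trace
```

With the error nearly cancelled, qo ends up varying almost exactly opposite to d, even though the loop never senses d itself; only the controlled input is sensed.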
By extension, the elaboration of the theory constitutes a general model of cognitive process and behavior. With every specific model or simulation of behavior that is constructed and tested against observed data, the general model that is presented in the theory is exposed to potential challenge that could call for revision or could lead to refutation.
To illustrate the mathematical calculations employed in a PCT simulation, consider a pursuit tracking task in which the participant keeps a mouse cursor aligned with a moving target on a computer monitor.
The model assumes that a perceptual signal within the participant represents the magnitude of the input quantity qi. (This has been demonstrated to be a rate of firing in a neuron, at least at the lowest levels.) In the tracking task, the input quantity is the vertical distance between the target position T and the cursor position C, and the random variation of the target position acts as the disturbance d of that input quantity. This suggests that the perceptual signal p quantitatively represents the cursor position C minus the target position T, as expressed in the equation p = C – T.
Between the perception of target and cursor and the construction of the signal representing the distance between them there is a delay of Τ milliseconds, so that the working perceptual signal at time t represents the target-to-cursor distance at a prior time, t – Τ. Consequently, the equation used in the model is
1. p(t) = C(t–Τ) – T(t–Τ)
The negative feedback control system receives a reference signal r which specifies the magnitude of the given perceptual signal which is currently intended or desired. (For the origin of r within the organism, see under "A hierarchy of control", below.) Both r and p are input to a simple neural structure with r excitatory and p inhibitory. This structure is called a "comparator". The effect is to subtract p from r, yielding an error signal e that indicates the magnitude and sign of the difference between the desired magnitude r and the currently input magnitude p of the given perception. The equation representing this in the model is:
2. e = r – p
The error signal e must be transformed to the output quantity qo (representing the participant's muscular efforts affecting the mouse position). Experiments have shown that in the best model for the output function, the mouse velocity Vcursor is proportional to the error signal e by a gain factor G (that is, Vcursor = G*e). Thus, when the perceptual signal p is smaller than the reference signal r, the error signal e has a positive sign, and from it the model computes an upward velocity of the cursor that is proportional to the error.
The next position of the cursor Cnew is the current position Cold plus the velocity Vcursor times the duration dt of one iteration of the program. By simple algebra, we substitute G*e (as given above) for Vcursor, yielding a third equation:
3. Cnew = Cold + G*e*dt
These three simple equations or program steps constitute the simplest form of the model for the tracking task. When these three simultaneous equations are evaluated over and over with the same random disturbances d of the target position that the human participant experienced, the output positions and velocities of the cursor duplicate the participant's actions in the tracking task above within 4.0% of their peak-to-peak range, in great detail.
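Under assumed values for G, the delay, the time step, and the disturbance statistics (none of which are taken from the published experiments), the three equations can be run as a loop like this:

```python
import random

def track(steps=2000, dt=0.01, G=8.0, tau=8, r=0.0, seed=2):
    """Iterate equations 1-3: delayed perception, comparator, output velocity."""
    rng = random.Random(seed)
    T, C = 0.0, 0.0                    # target and cursor positions
    history = []                       # past values of C - T, for the delay
    errors = []
    for _ in range(steps):
        T += rng.uniform(-0.2, 0.2)    # random disturbance d moves the target
        history.append(C - T)
        p = history[-tau] if len(history) >= tau else 0.0   # eq. 1 (delayed)
        e = r - p                      # eq. 2: comparator
        C = C + G * e * dt             # eq. 3: integrate cursor velocity G*e
        errors.append(C - T)
    return errors
```

Even with a perceptual delay of several iterations, the cursor stays close to the randomly wandering target, because the loop continuously acts against whatever error currently exists rather than predicting the target's path.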
This simple model can be refined with a damping factor d (distinct from the disturbance d), which reduces the discrepancy between the model and the human participant to 3.6% when the disturbance is set to maximum difficulty.
3'. Cnew = Cold + [(G*e)–(d*Cold)]*dt
Detailed discussion of this model in (Powers 2008) includes both source and executable code, with which the reader can verify how well this simple program simulates real behavior. No consideration is needed of possible nonlinearities such as the Weber-Fechner law, potential noise in the system, continuously varying angles at the joints, and many other factors that could afflict performance if this were a simple linear model. No inverse kinematics or predictive calculations are required. The model simply reduces the discrepancy between input p and reference r continuously as it arises in real time, and that is all that is required—as predicted by the theory.
PCT as a solution to GOFAI complexity
Good Old Fashioned Artificial Intelligence (GOFAI) is an early (1960s) approach to building intelligent systems. GOFAI assumes that AI programs are more complex versions of existing software designs, and that they can be built by extending existing procedural approaches and standard software methodologies. Objections to GOFAI can be separated into those based on program semantics, and those based on algorithmic complexity.
Implicit in the theoretical approach to GOFAI is the belief that minds (animal and human) are essentially syntactic, that is, made from symbols combined into expressions, a stance known as the Physical Symbol Systems Hypothesis. In recent years, this belief has been challenged by those who believe that minds are essentially semantic, i.e. they process embodied and situated meaning, not abstract data symbols. The 'poster child' of this oppositional stance is a thought experiment (Gedankenexperiment) called Searle's Chinese Room (SCR), named after its inventor, philosopher John Searle. PCT is inherently situated, embodied, and dynamic.
Practical attempts at building GOFAI soon run into asymptotic increases in algorithmic complexity, so-called 'complexity explosions'. They arise from the control paradigm assumed by GOFAI practitioners, which views governance (= feedforward command + feedback control) as modeling. In other words, for a GOFAI system to 'reason' about the world, it must first build an internal symbolic model of that world upon which it can apply sequences of symbolic manipulations in accord with the principles of Turing and Von Neumann abstract machines. Even small changes in the world must be reflected in the GOFAI program's model, in case they might be critical to its logical output. But any world model which contains sufficient detail to ensure correct GOFAI output is inevitably too complex, with too many moving parts to be practical, so that by the time world changes are updated, the world has already moved on.
PCT avoids the complexity issues of GOFAI by basing its real-time governance on a perceptual window (dimensionally reduced projection) of the world, which is much more tractable than the entire world. Uexküll, Dyer, and Powers arrived independently at this insight, which is important for robotics.
Distinctions from engineering control theory
In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'. In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems.
In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory with the PCT implementation as a hierarchy of five simple control systems.
A hierarchy of control
Perceptions, in PCT, are constructed and controlled in a hierarchy of levels. For example, visual perception of an object is constructed from differences in light intensity or differences in sensations such as color at its edges. Controlling the shape or location of the object requires altering the perceptions of sensations or intensities (which are controlled by lower-level systems). This organizing principle is applied at all levels, up to the most abstract philosophical and theoretical constructs.
The Russian physiologist Nikolai Bernstein independently came to the same conclusion that behavior has to be multiordinal—organized hierarchically, in layers. A simple problem led to this conclusion at about the same time both in PCT and in Bernstein's work. The spinal reflexes act to stabilize limbs against disturbances. Why do they not prevent centers higher in the brain from using those limbs to carry out behavior? Since the brain obviously does use the spinal systems in producing behavior, there must be a principle that allows the higher systems to operate by incorporating the reflexes, not just by overcoming them or turning them off. The answer is that the reference value (setpoint) for a spinal reflex is not static; rather, it is varied by higher-level systems as their means of moving the limbs. This principle applies to higher feedback loops, as each loop presents the same problem to subsystems above it.
Whereas an engineered control system has a reference value or setpoint adjusted by some external agency, the reference value for a biological control system cannot be set in this way. The setpoint must come from some internal process. If there is a way for behavior to affect it, any perception may be brought to the state momentarily specified by higher levels and then be maintained in that state against unpredictable disturbances. In a hierarchy of control systems, higher levels adjust the goals of lower levels as their means of approaching their own goals set by still-higher systems. This has important consequences for any proposed external control of an autonomous living control system (organism). At the highest level, reference values (goals) are set by heredity or adaptive processes.
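The principle of higher loops setting lower loops' references can be sketched with two nested controllers. The gains and the simple physics below are illustrative assumptions:

```python
def hierarchy(goal_pos=5.0, dt=0.01, steps=3000):
    """Higher loop controls position by varying the velocity loop's reference."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel_ref = 2.0 * (goal_pos - pos)   # higher level: its error sets the
                                           # lower level's reference, not a force
        force = 10.0 * (vel_ref - vel)     # lower level: controls perceived velocity
        vel += force * dt                  # environment: force changes velocity,
        pos += vel * dt                    # and velocity changes position
    return pos
```

The higher loop never commands forces directly; it only moves the goal of the loop below it, which is the sense in which spinal reflexes are incorporated rather than overridden or switched off.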
Reorganization in evolution, development, and learning
If an organism controls inappropriate perceptions, or if it controls some perceptions to inappropriate values, then it is less likely to bring progeny to maturity, and may die. Consequently, by natural selection successive generations of organisms evolve so that they control those perceptions that, when controlled with appropriate setpoints, tend to maintain critical internal variables at optimal levels, or at least within non-lethal limits. Powers called these critical internal variables "intrinsic variables" (Ashby's "essential variables").
The mechanism that influences the development of structures of perceptions to be controlled is termed "reorganization", a process within the individual organism that is subject to natural selection just as is the evolved structure of individuals within a species.
This "reorganization system" is proposed to be part of the inherited structure of the organism. It changes the underlying parameters and connectivity of the control hierarchy in a random-walk manner. There is a basic continuous rate of change in intrinsic variables which proceeds at a speed set by the total error (and stops at zero error), punctuated by random changes in direction in a hyperspace with as many dimensions as there are critical variables. This is a more or less direct adaptation of Ashby's "homeostat", first adopted into PCT in the 1960 paper and then changed to use E. coli's method of navigating up gradients of nutrients, as described by Koshland (1980).
Reorganization may occur at any level when loss of control at that level causes intrinsic (essential) variables to deviate from genetically determined set points. This is the basic mechanism that is involved in trial-and-error learning, which leads to the acquisition of more systematic kinds of learning processes.
In a hierarchy of interacting control systems, different systems at one level can send conflicting goals to one lower system. When two systems are specifying different goals for the same lower-level variable, they are in conflict. Protracted conflict is experienced by human beings as many forms of psychological distress such as anxiety, obsession, depression, confusion, and vacillation. Severe conflict prevents the affected systems from being able to control, effectively destroying their function for the organism.
Higher level control systems often are able to use known strategies (which are themselves acquired through prior reorganizations) to seek perceptions that don't produce the conflict. Normally, this takes place without notice. If the conflict persists and systematic "problem solving" by higher systems fails, the reorganization system may modify existing systems until they bypass the conflict or until they produce new reference signals (goals) that are not in conflict at lower levels.
When reorganization results in an arrangement that reduces or eliminates the error that is driving it, the process of reorganization slows or stops with the new organization in place. (This replaces the concept of reinforcement learning.) New means of controlling the perceptions involved, and indeed new perceptual constructs subject to control, may also result from reorganization. In simplest terms, the reorganization process varies things until something works, at which point we say that the organism has learned. When simulations of this method are done in the right way, they demonstrate that this method is surprisingly efficient.
Psychotherapy: the method of levels (MOL)
The reorganization concept has led to a method of psychotherapy called the method of levels (MOL). Using MOL, the therapist aims to help the patient shift his or her awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.
Currently, no one theory has been agreed upon to explain the synaptic, neuronal or systemic basis of learning. Prominent since 1973, however, is the idea that long-term potentiation (LTP) of populations of synapses induces learning through both pre- and postsynaptic mechanisms (Bliss & Lømo, 1973; Bliss & Gardner-Medwin, 1973). LTP is a form of Hebbian learning, which proposes that high-frequency, tonic activation of a circuit of neurones increases the efficacy with which they are activated and the size of their response to a given stimulus, compared with baseline (Hebb, 1949). These mechanisms are often summarized in the maxim "cells that fire together, wire together", a later gloss on Hebb's (1949) postulate.
LTP has received much support since it was first observed by Terje Lømo in 1966 and is still the subject of many modern studies and clinical research. However, alternative mechanisms underlying LTP have been proposed, for example by Enoki, Hu, Hamilton and Fine in 2009, in the journal Neuron. They accept that LTP is a basis of learning, but propose, first, that LTP occurs at individual synapses, and that this plasticity is graded (rather than binary) and bidirectional (Enoki et al., 2009). Second, the group suggests that the synaptic changes are expressed solely presynaptically, via changes in the probability of transmitter release (Enoki et al., 2009). Finally, the team predicts that the occurrence of LTP could be age-dependent, as the plasticity of a neonatal brain is higher than that of a mature one. Therefore, the theories differ: one proposes an on/off occurrence of LTP via pre- and postsynaptic mechanisms, and the other proposes solely presynaptic changes, graded plasticity, and age-dependence.
These theories do agree on one element of LTP, namely, that it must occur through physical changes to the synaptic membranes, i.e. synaptic plasticity. Perceptual control theory encompasses both of these views. It proposes the mechanism of 'reorganisation' as the basis of learning. Reorganisation occurs within the inherent control system of a human or animal by restructuring the inter- and intraconnections of its hierarchical organisation, akin to the neuroscientific phenomenon of neural plasticity. This reorganisation initially allows the trial-and-error form of learning seen in babies, then progresses to more structured learning through association, apparent in infants, and finally to systematic learning, covering the adult ability to learn from both internally and externally generated stimuli and events. In this way, PCT provides a model for learning that combines the biological mechanisms of LTP with an explanation of the progression and change of mechanisms associated with developmental ability (Plooij 1984, 1987, 2003; Plooij & Plooij 1990, 2013).
Powers (2008) produced a simulation of arm co-ordination. He suggested that in order to move your arm, fourteen control systems that control fourteen joint angles are involved, and they reorganise simultaneously and independently. It was found that for optimum performance, the output functions must be organised such that each control system's output affects only the one environmental variable it is perceiving. In this simulation, the reorganising process works as Powers suggests it works in humans, reducing outputs that cause error and increasing those that reduce error. Initially, the disturbances have large effects on the angles of the joints, but over time the joint angles match the reference signals more closely as the system reorganises. Powers (2008) suggests that in order to co-ordinate joint angles to produce desired movements, rather than calculating how multiple joint angles must change to produce a given movement, the brain uses negative feedback systems to generate the joint angles that are required. A single reference signal varied in a higher-order system can generate a movement that requires several joint angles to change at the same time.
Botvinick (2008) proposed that one of the founding insights of the cognitive revolution was the recognition of hierarchical structure in human behavior. Despite decades of research, however, the computational mechanisms underlying hierarchically organized behavior are still not fully understood. Badre, Hoffman, Cooney & D'Esposito (2009) propose that a fundamental goal in cognitive neuroscience is to characterize the functional organization of the frontal cortex that supports the control of action.
Recent neuroimaging data support the hypothesis that the frontal lobes are organized hierarchically, such that control is supported in progressively caudal regions as it moves to more concrete specification of action. However, it is still not clear whether lower-order control processors are differentially affected by impairments in higher-order control when between-level interactions are required to complete a task, or whether there are feedback influences of lower-level on higher-level control (Badre, Hoffman, Cooney & D'Esposito 2009).
Botvinick (2008) found that all existing models of hierarchically structured behavior share at least one general assumption: that the hierarchical, part–whole organization of human action is mirrored in the internal or neural representations underlying it. Specifically, the assumption is that there exist representations not only of low-level motor behaviors, but also separable representations of higher-level behavioral units. The latest crop of models provides new insights, but also poses new or refined questions for empirical research, including how abstract action representations emerge through learning, how they interact with different modes of action control, and how they are organized within the prefrontal cortex (PFC).
Perceptual control theory (PCT) can provide an explanatory model of neural organisation that deals with these issues. PCT describes the hierarchical character of behavior as being determined by control of hierarchically organized perception. Control systems in the body and in the internal environment of billions of interconnected neurons within the brain are responsible for keeping perceptual signals within survivable limits in the unpredictably variable environment from which those perceptions are derived. PCT does not propose that the brain contains an internal model within which it simulates behavior before issuing commands to execute that behavior; indeed, a characteristic feature of PCT is its principled absence of cerebral organisation of behavior as such. Rather, behavior is the organism's variable means to reduce the discrepancy between perceptions and reference values which are based on various external and internal inputs (Cools, 1985). Behavior must constantly adapt and change for an organism to maintain its perceptual goals. In this way, PCT can provide an explanation of abstract learning through spontaneous reorganisation of the hierarchy. PCT proposes that conflict occurs between disparate reference values for a given perception rather than between different responses (Mansell 2011), and that learning is implemented as trial-and-error changes of the properties of control systems (Marken & Powers 1989), rather than as the reinforcement of any specific response. In this way, behavior remains adaptive to the environment as it unfolds, rather than relying on learned action patterns that may not fit.
Hierarchies of perceptual control have been simulated in computer models and have been shown to provide a close match to behavioral data. For example, Marken compared the behavior of a perceptual control hierarchy computer model with that of six healthy volunteers across three experiments. The participants were required to keep the distance between a left line and a centre line equal to that between the centre line and a right line. They were also instructed to keep both distances equal to 2 cm. They held two paddles, one controlling the left line and one controlling the middle line. To do this, they had to resist random disturbances applied to the positions of the lines. As the participants achieved control, they nullified the expected effect of the disturbances by moving their paddles. The correlation between the behavior of subjects and the model in all the experiments approached 0.99. It is proposed that the organization of models of hierarchical control systems such as this informs us about the organization of the human subjects whose behavior it so closely reproduces.
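A compensatory tracking loop of the kind used in such experiments can be sketched as follows; the disturbance profile, gain, and smoothing here are illustrative assumptions, not Marken's published parameters. The signature of control is that the model's handle movements end up mirroring the disturbance (correlation near −1):

```python
import math
import random

# Compensatory tracking sketch (assumed gain and disturbance profile).
# The model varies its handle to keep the cursor on target against a
# slowly drifting random disturbance it cannot sense directly.
random.seed(1)

def smoothed_disturbance(n, alpha=0.99, scale=50.0):
    values, x = [], 0.0
    for _ in range(n):
        x = alpha * x + (1 - alpha) * random.gauss(0.0, scale)
        values.append(x)
    return values

n = 5000
disturbance = smoothed_disturbance(n)
handle, handles = 0.0, []
for t in range(n):
    cursor = handle + disturbance[t]   # environment combines action and disturbance
    error = 0.0 - cursor               # reference: cursor centred on the target
    handle += 0.2 * error              # action integrates the error
    handles.append(handle)

# Pearson correlation between handle positions and disturbance values.
mh = sum(handles) / n
md = sum(disturbance) / n
num = sum((h - mh) * (d - md) for h, d in zip(handles, disturbance))
den = math.sqrt(sum((h - mh) ** 2 for h in handles)
                * sum((d - md) ** 2 for d in disturbance))
r = num / den
# r is strongly negative: the handle mirrors the disturbance, so the
# cursor (the controlled perception) stays near the target.
```

Comparing such a model's handle trace against a human subject's, point by point, is what yields the very high model-subject correlations reported above.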
PCT has significant implications for robotics and artificial intelligence. The architecture of a hierarchy of perceptual controllers is an ideal and comparatively simple basis for artificial systems, as it avoids the need to generate specific actions, whether by means of complex models of the external world or by computing input-output mappings. This is in stark contrast to traditional methodologies, such as the computational approach and behavior-based robotics.
The application of perceptual control to robotics was outlined in a seminal paper in the Artificial Life journal by Rupert Young of Perceptual Robots in 2017. The architecture has been applied to a number of real-world robotic systems, including robotic rovers, a balancing robot, and robot arms.
Traditional approaches to robotics, which generally depend upon the computation of actions in specific situations, result in inflexible, clumsy robots unable to cope with the dynamic nature of the world. PCT robots, on the other hand, inherently resist and counter the disturbances of a chaotic, unpredictable world.
Current situation and prospects
The preceding explanation of PCT principles shows how the theory can provide a valid account of neural organisation and address some of the open issues in conceptual models of behavior.
Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by adjusting the reference levels (goals) of the lower levels, in effect telling the lower levels what to perceive.
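The level-setting principle can be illustrated with a toy two-level hierarchy (the levels and gains here are illustrative, not the canonical eleven): a higher system controls the sum of two lower-level perceptions solely by adjusting the lower systems' references, never by commanding actions directly:

```python
# Toy two-level control hierarchy (illustrative levels and gains). The
# higher system perceives the SUM of two lower-level perceptions and
# acts only by adjusting the lower systems' reference signals; each
# lower system counters its own disturbance.
def run(steps=3000, high_ref=10.0, dt=0.01):
    out = [0.0, 0.0]        # lower-level outputs acting on the environment
    low_ref = [0.0, 0.0]    # references set by the higher level
    for _ in range(steps):
        d = [2.0, -4.0]     # independent disturbances on the lower loops
        p = [out[i] + d[i] for i in (0, 1)]
        high_p = p[0] + p[1]                 # higher-level perception
        high_err = high_ref - high_p
        for i in (0, 1):
            # Higher-level output is distributed as lower-level references,
            # in effect telling the lower levels what to perceive.
            low_ref[i] += 20.0 * high_err * dt / 2
            # Each lower level controls its own perception against its reference.
            out[i] += 100.0 * (low_ref[i] - p[i]) * dt
    return p, high_p

p, high_p = run()
# high_p converges to high_ref (10), and each lower perception to 5,
# although the higher level never issues any motor command.
```
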
While many computer demonstrations of principles have been developed, the proposed higher levels are difficult to model because too little is known about how the brain works at these levels. Isolated higher-level control processes can be investigated, but models of an extensive hierarchy of control are still only conceptual, or at best rudimentary.
Perceptual control theory has not been widely accepted in mainstream psychology, but has been effectively used in a considerable range of domains in human factors, clinical psychology, and psychotherapy (the "Method of Levels"). It is the basis for a considerable body of research in sociology and has formed the conceptual foundation for the reference model used by a succession of NATO research study groups. It is taught in several universities worldwide and is the subject of a number of PhD dissertations.
- Cziko, Gary. (1995). Without miracles: Universal selection theory and the second Darwinian revolution. Cambridge, MA: MIT Press (A Bradford Book). ISBN 0-262-53147-X (Online)
- Cziko, Gary. (2000). The things we do: Using the lessons of Bernard and Darwin to understand the what, how, and why of our behavior. Cambridge, MA: MIT Press (A Bradford Book). ISBN 0-262-03277-5 (Online)
- Marken, Richard S. (1992) Mind readings: Experimental studies of purpose. Benchmark Publications: New Canaan, CT.
- Marken, Richard S. (2002) More mind readings: Methods and models in the study of purpose. Chapel Hill, NC: New View. ISBN 0-944337-43-0
- Plooij, F. X. (1984). The behavioral development of free-living chimpanzee babies and infants. Norwood, N.J.: Ablex.
- Plooij, F. X. (2003). "The trilogy of mind". In M. Heimann (Ed.), Regression periods in human infancy (pp. 185–205). Mahwah, NJ: Erlbaum.
- Powers, William T. (1973). Behavior: The control of perception. Chicago: Aldine de Gruyter. ISBN 0-202-25113-6. [2nd exp. ed. = Powers (2005)].
- Powers, William T. (1989). Living control systems. [Selected papers 1960–1988.] New Canaan, CT: Benchmark Publications. ISBN 0-9647121-3-X.
- Powers, William T. (1992). Living control systems II. [Selected papers 1959–1990.] New Canaan, CT: Benchmark Publications.
- Powers, William T. (1998). Making sense of behavior: The meaning of control. New Canaan, CT: Benchmark Publications. ISBN 0-9647121-5-6.
- Powers, William T. (2005). Behavior: The control of perception. New Canaan: Benchmark Publications. ISBN 0-9647121-7-2. [2nd exp. ed. of Powers (1973). Chinese tr. (2004) Guangdong Higher Learning Education Press, Guangzhou, China. ISBN 7-5361-2996-3.]
- Powers, William T. (2008). Living Control Systems III: The fact of control. [Mathematical appendix by Dr. Richard Kennaway. Includes computer programs for the reader to demonstrate and experimentally test the theory.] New Canaan, CT: Benchmark Publications. ISBN 978-0-9647121-8-8.
- Powers, William T., Clark, R. K., and McFarland, R. L. (1960). "A general feedback theory of human behavior [Part 1; Part 2]". Perceptual and Motor Skills 11, 71–88; 309–323.
- Powers, William T. and Runkel, Philip J. 2011. Dialogue concerning the two chief approaches to a science of life: Word pictures and correlations versus working models. Hayward, CA: Living Control Systems Publishing. ISBN 0-9740155-1-2.
- Robertson, R.J. & Powers, W.T. (1990). Introduction to modern psychology: the control-theory view. Gravel Switch, KY: Control Systems Group.
- Robertson, R. J., Goldstein, D.M., Mermel, M., & Musgrave, M. (1999). Testing the self as a control system: Theoretical and methodological issues. Int. J. Human-Computer Studies, 50, 571-580.
- Runkel, Philip J[ulian]. 1990. Casting Nets and Testing Specimens: Two Grand Methods of Psychology. New York: Praeger. ISBN 0-275-93533-7. [Repr. 2007, Hayward, CA: Living Control Systems Publishing. ISBN 0-9740155-7-1.]
- Runkel, Philip J[ulian]. (2003). People as living things. Hayward, CA: Living Control Systems Publishing. ISBN 0-9740155-0-4
- Young, Rupert. (2017). "A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life". Artificial Life 23:2, pp. 236–286.
- McClelland, Kent (1994). "Perceptual Control and Social Power". Sociological Perspectives. 37 (4): 461–496. doi:10.2307/1389276. JSTOR 1389276.
- McClelland, Kent (2004). "The Collective Control of Perceptions: Constructing Order from Conflict". International Journal of Human-Computer Studies. 60: 65–99. doi:10.1016/j.ijhcs.2003.08.003.
- McClelland, Kent and Thomas J. Fararo, eds. (2006). Purpose, Meaning, and Action: Control Systems Theories in Sociology. New York: Palgrave Macmillan.
- McPhail, Clark. 1991. The Myth of the Madding Crowd. New York: Aldine de Gruyter.
- For example in this collection.
- Runkel, Philip J. (1990). Casting nets and testing specimens: Two grand methods of psychology. New York: Praeger. p. 103. ISBN 978-0-275-93533-7.
- "The behaviorist asks: Why don't we make what we can observe the real field of psychology? Let us limit ourselves to things that can be observed, and formulate laws concerning only those things. Now what can we observe? We can observe behavior—what the organism does or says." Watson, J.B. (1924). Behaviorism. New York: People's Institute Publishing Company.
- Marken, Richard S. (June 2009). "You say you had a revolution: Methodological foundations of closed-loop psychology". Review of General Psychology. 13 (2): 137–145. doi:10.1037/a0015106.
- Runkel, Philip J. (2003). People as living things. Hayward, CA: Living Control Systems Publishing. ISBN 978-0-9740155-0-7.
- Cziko, Gary (2000), The things we do: Using the lessons of Bernard and Darwin to understand the what, how, and why of our behavior, Cambridge, MA: MIT Press, p. 9, ISBN 978-0-262-03277-3
- Astrom, Karl J.; Murray, Richard M. (2008). Feedback Systems: An Introduction for Scientists and Engineers (PDF). Princeton University Press. ISBN 978-0-691-13576-2.
- For additional information about the history of PCT, see interviews with William T. Powers in the "Audio" section under "External links".
- Powers, William T.; Clark, R.K.; McFarland, R.L. (1960). "A general feedback theory of human behavior (Part I)". Perceptual and Motor Skills. 11 (1): 71–88. doi:10.2466/pms.1960.11.1.71. and Powers, William T.; Clark, R.K.; McFarland, R.L. (1960). "A general feedback theory of human behavior (Part II)". Perceptual and Motor Skills. 11 (3): 309–323. doi:10.2466/pms.1960.11.3.309. [Reprinted in Bertalanffy, Ludwig von; Rapoport, Anatol (1960), General Systems: Yearbook of the Society for General Systems Research, 5, Ann Arbor, Michigan: Society for General Systems Research, pages 63–73, 75–83. Partial reprint in Smith, A. G. (1966). Communication and Culture. New York: Holt, Rinehart, and Winston.]
- Wiener, Norbert (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. Paris: Hermann & Cie. 2nd revised ed. 1961, MIT Press, Cambridge, MA. ISBN 978-0-262-73009-9.
- Ashby, W[illiam] Ross (1952). Design for a Brain. London: Chapman & Hall.
- Archives of the Control Systems Group (CSG)
- Mansell, Warren (2011). "Control of perception should be operationalized as a fundamental property of the nervous system". Topics in Cognitive Science. 3 (2): 257–261. doi:10.1111/j.1756-8765.2011.01140.x.
- Mansell, Warren; Carey, Timothy A. (28 November 2015). "A perceptual control revolution?". The Psychologist. The British Psychological Society. Retrieved 17 July 2016.
- Harold Black and the Negative-Feedback Amplifier, Ronald Kline, IEEE Control Systems Magazine, Aug 1993, Volume 13, Issue 4, Pages 82-85
- Bennett, Stuart (June 1996). "A brief history of automatic control" (PDF). IEEE Control Systems Magazine. 16 (3): 17–25. doi:10.1109/37.506394. Retrieved 18 July 2016.
- Miller, George; Galanter, Eugene; Pribram, Karl (1960). Plans and the structure of behavior. New York: Holt, Rinehart and Winston. ISBN 978-0-03-010075-8.
- Runkel, Philip J. (2003). People as living things. Hayward, CA: Living Control Systems Publishing. pp. 77–79. ISBN 978-0-9740155-0-7.
- Marken, Richard S. (2001). "Controlled variables: psychology as the center fielder views it". American Journal of Psychology. 114 (2): 259–281. CiteSeerX 10.1.1.554.9588. doi:10.2307/1423517. JSTOR 1423517.
- See e.g. works listed here.
- See Runkel 1990 on the limitations of statistical methods and the value of individual performance data.
- Powers, William T. (2008). Living Control Systems III: The fact of control. New Canaan, CT: Benchmark Publications. ISBN 978-0-9647121-8-8. [Mathematical appendix by Dr. Richard Kennaway. Includes computer programs for the reader to demonstrate and experimentally test the theory.]
- Powers, William T. (1973). Behavior: The Control of Perception. ISBN 978-0-7045-0092-1.
- Yin, Henry H. (18 November 2014). "How Basal Ganglia Outputs Generate Behavior". Advances in Neuroscience. 2014 (768313): 1–28. doi:10.1155/2014/768313.
- Marken, Richard S.; William T., Powers (1989), "Levels of intention in behavior", in Hershberger, Wayne (ed.), Volitional Action, Advances in psychology, 62, Amsterdam: Elsevier B.V., pp. 409–430, ISBN 978-0-444-88318-6
- Documented e.g. at Miranda, José Luis Corona. 2009. “Application of Kalman Filtering and PID Control for Direct Inverted Pendulum Control”. M.A. Thesis, Chico State University, Chico, CA.
- Documented at Powers, William T. & Richard Kennaway. (Edited by Dag Forssell.) 2004. “Inverted Pendulum”. Hayward, CA: Living Control Systems., with downloadable source and executable code. A more detailed exposition of the differences between PCT and engineering control theory, with computer demonstrations and source code, is available at http://www.livingcontrolsystems.com/demos/multiple_control/multiple_control.zip. This is one of many computer demonstrations that are available, with source code, at www.livingcontrolsystems.com/demos/tutor_pct.html.
- Bernstein, Nicolas. 1967. Coordination and regulation of movements. New York: Pergamon Press.
- For an introduction, see the Byte articles on robotics and the article on the origins of purpose in this collection.
- Koshland, Daniel. (1980). Bacterial chemotaxis as a model behavioral system. New York: Raven Press.
- Cziko, Gary (1995). Without Miracles. ISBN 978-0-262-03232-2.
- Mansell, Warren; Carey, Timothy A; Tai, Sara (2012). A transdiagnostic approach to CBT using method of levels therapy: distinctive features. The CBT distinctive features series. Milton Park, Abingdon, Oxon; New York: Routledge. doi:10.4324/9780203081334. ISBN 9780415507639. OCLC 774499959.
- Hebb, Donald (1949). The organization of behavior: A neuropsychological theory. New York: Wiley & Sons.
- Plooij, Frans X. (1984). The behavioral development of free-living chimpanzee babies and infants. Norwood, N.J.: Ablex.
- van de Rijt-Plooij, Hetty; Plooij, Frans (1987). "Growing independence, conflict and learning in mother-infant relations in free-ranging chimpanzees". Behaviour. 101 (1–3): 1–86. doi:10.1163/156853987x00378.
- Plooij, Frans X. (2003), Heimann, M. (ed.), The trilogy of mind, Regression periods in human infancy, Mahwah, New Jersey: Erlbaum, pp. 185–205
- Plooij, Frans X.; van de Rijt-Plooij, Hetty (1990). "Developmental transitions as successive reorganizations of a control hierarchy". American Behavioral Scientist. 34: 67–80. doi:10.1177/0002764290034001007.
- van de Rijt-Plooij, Hetty; Plooij, Frans (October 22, 2013). The Wonder Weeks: How to Stimulate Your Baby's Mental Development and Help Him Turn His 10 Predictable, Great, Fussy Phases into Magical Leaps Forward. Arnhem, Netherlands: Kiddy World Publishing. p. 480. ISBN 978-9491882005.
- Marken, Richard S. (Aug 1986). "Perceptual organization of behavior: A hierarchical control model of coordinated action". Journal of Experimental Psychology: Human Perception and Performance. 12 (3): 267–276. doi:10.1037/0096-1523.12.3.267. PMID 2943855.
- Young, Rupert (Jun 2017). "A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life". Artificial Life. 23 (2): 236–286. doi:10.1162/ARTL_a_00229. PMID 28513206.
- Young, Rupert (Feb 27, 2015). HPCT Autonomous Rover (short version) (YouTube). Perceptual Robots.
- Young, Rupert (Mar 9, 2016). Robot on a train (YouTube). Perceptual Robots.
- Young, Rupert (Jul 1, 2016). Dynamic Visual Robot Arm Control (YouTube). Perceptual Robots.
- The June 1999 Issue of The International Journal of Human-Computer Studies contained papers ranging from tracking through cockpit layout to self-image and crowd dynamics.
- PCT lies at the foundation of Component-Based Usability Testing.
- For example: McClelland, Kent A. and Thomas J. Fararo, eds. 2006, Purpose, Meaning and Action: Control Systems Theories in Sociology, New York: Palgrave Macmillan. (McClelland is co-author of Chapter 1, "Control Systems Thinking in Sociological Theory," and author of Chapter 2, "Understanding Collective Control Processes."). McClelland, Kent, 2004, "Collective Control of Perception: Constructing Order from Conflict", International Journal of Human-Computer Studies 60:65-99. McPhail, Clark. 1991, The myth of the madding crowd New York: Aldine de Gruyter.
- Reports of these groups are available from the NATO Research and Technology Administration publications page (archived copy, archived from the original on 2010-06-23; retrieved 2010-05-15) under the titles RTO-TR-030, RTO-TR-IST-021, and RTO-TR-IST-059.
- Forssell, Dag, ed. (May 2016). Perceptual Control Theory, An Overview of the Third Grand Theory in Psychology: Introductions, Readings, and Resources. Hayward, CA: Living Control Systems Publishing. p. 408. ISBN 978-1-938090-12-7.
- PCT for the Beginner by William T. Powers (2007)
- The Dispute Over Control theory by William T. Powers (1993) – requires access approval
- Demonstrations of perceptual control by Gary Cziko (2006)
- The ‘Natural Selection’ of Robotics by Rupert Young. (2017).
- Interview with William T. Powers on origin and history of PCT (Part One, 2006-07-22, 58.7 MB)
- Interview with William T. Powers on origin and history of PCT (Part Two, 2007-07-28, 57.7 MB)
- The International Association for Perceptual Control Systems – The IAPCT website.
- PCTWeb – Warren Mansell's comprehensive website on PCT.
- Living Control Systems Publishing – resources and books about PCT.
- Mind Readings – Rick Marken's website on PCT, with many interactive demonstrations.
- Method of Levels – Timothy Carey's website on the Method of Levels.
- Perceptual Robots – The PCT methodology and architecture applied to robotics.
- ResearchGate Project – Recent research products.