Examples of primary rewards: drinking water, kissing, a selection of foods, and parental care of a newborn infant.[1]
Addiction and dependence glossary[2][3][4][5]
addiction – a brain disorder characterized by compulsive engagement in rewarding stimuli despite adverse consequences
addictive behavior – a behavior that is both rewarding and reinforcing
addictive drug – a drug that is both rewarding and reinforcing
dependence – an adaptive state associated with a withdrawal syndrome upon cessation of repeated exposure to a stimulus (e.g., drug intake)
drug sensitization or reverse tolerance – the escalating effect of a drug resulting from repeated administration at a given dose
drug withdrawal – symptoms that occur upon cessation of repeated drug use
physical dependence – dependence that involves persistent physical–somatic withdrawal symptoms (e.g., fatigue and delirium tremens)
psychological dependence – dependence that involves emotional–motivational withdrawal symptoms (e.g., dysphoria and anhedonia)
reinforcing stimuli – stimuli that increase the probability of repeating behaviors paired with them
rewarding stimuli – stimuli that the brain interprets as intrinsically positive and desirable or as something to approach
sensitization – an amplified response to a stimulus resulting from repeated exposure to it
substance use disorder – a condition in which the use of substances leads to clinically and functionally significant impairment or distress
tolerance – the diminishing effect of a drug resulting from repeated administration at a given dose

The reward system is a group of neural structures responsible for incentive salience (i.e., motivation and "wanting", desire, or craving for a reward), associative learning (primarily positive reinforcement and classical conditioning), and positive emotions, particularly ones which involve pleasure as a core component (e.g., joy, euphoria and ecstasy).[1][6] Reward is the attractive and motivational property of a stimulus that induces appetitive behavior – also known as approach behavior – and consummatory behavior.[1] In its description of a rewarding stimulus (i.e., "a reward"), a review on reward neuroscience noted, "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward."[1] In operant conditioning, rewarding stimuli function as positive reinforcers;[1] however, the converse statement also holds true: positive reinforcers are rewarding.[1]

Primary rewards are a class of rewarding stimuli which facilitate the survival of one's self and offspring, and include homeostatic (e.g., palatable food) and reproductive (e.g., sexual contact and parental investment) rewards.[1][7] Intrinsic rewards are unconditioned rewards that are attractive and motivate behavior because they are inherently pleasurable.[1] Extrinsic rewards (e.g., money) are conditioned rewards that are attractive and motivate behavior, but are not inherently pleasurable.[1] Extrinsic rewards derive their motivational value as a result of a learned association (i.e., conditioning) with intrinsic rewards.[1] Extrinsic rewards may also elicit pleasure (e.g., from winning a lot of money in a lottery) after being classically conditioned with intrinsic rewards.[1]

Survival for most animal species depends upon maximizing contact with beneficial stimuli and minimizing contact with harmful stimuli. Reward cognition serves to increase the likelihood of survival and reproduction by causing associative learning, eliciting approach and consummatory behavior, and triggering positive emotions.[1] Thus, reward is a mechanism that evolved to help increase the adaptive fitness of animals.[8]



In neuroscience, the reward system is a collection of brain structures and neural pathways that are responsible for reward-related cognition, including associative learning (primarily classical conditioning and operant reinforcement), incentive salience (i.e., motivation and "wanting", desire, or craving for a reward), and positive emotions, particularly emotions that involve pleasure (i.e., hedonic "liking").[1][6]

Terms that are commonly used to describe behavior related to the "wanting" or desire component of reward include appetitive behavior, preparatory behavior, instrumental behavior, anticipatory behavior, and seeking.[9] Terms that are commonly used to describe behavior related to the "liking" or pleasure component of reward include consummatory behavior and taking behavior.[9]

The three primary functions of rewards are their capacity to:

  1. produce associative learning (i.e., classical conditioning and operant reinforcement);[1]
  2. affect decision-making and induce approach behavior (via the assignment of motivational salience to rewarding stimuli);[1]
  3. elicit positive emotions, particularly pleasure.[1]


The brain structures that compose the reward system are located primarily within the cortico-basal ganglia-thalamo-cortical loop;[10] the basal ganglia portion of the loop drives activity within the reward system.[10] Most of the pathways that connect structures within the reward system consist of glutamatergic interneurons, GABAergic medium spiny neurons, and dopaminergic projection neurons,[10][11] although other types of projection neurons contribute (e.g., orexinergic projection neurons). The reward system includes the ventral tegmental area, ventral striatum (i.e., the nucleus accumbens and olfactory tubercle), dorsal striatum (i.e., the caudate nucleus and putamen), substantia nigra (i.e., the pars compacta and pars reticulata), prefrontal cortex, anterior cingulate cortex, insular cortex, hippocampus, hypothalamus (particularly, the orexinergic nucleus in the lateral hypothalamus), thalamus (multiple nuclei), subthalamic nucleus, globus pallidus (both external and internal), ventral pallidum, parabrachial nucleus, amygdala, and the remainder of the extended amygdala.[6][10][12][13][14] The dorsal raphe nucleus and cerebellum appear to modulate some forms of reward-related cognition (i.e., associative learning, motivational salience, and positive emotions) and behaviors as well.[15][16][17]

Most of the dopamine pathways (i.e., neurons that use the neurotransmitter dopamine to communicate with other neurons) that project out of the ventral tegmental area are part of the reward system;[10] in these pathways, dopamine acts on D1-like receptors or D2-like receptors to either stimulate (D1-like) or inhibit (D2-like) the production of cAMP.[18] The GABAergic medium spiny neurons of the striatum are components of the reward system as well.[10] The glutamatergic projection nuclei in the subthalamic nucleus, prefrontal cortex, hippocampus, thalamus, and amygdala connect to other parts of the reward system via glutamate pathways.[10] The medial forebrain bundle, which is a set of many neural pathways that mediate brain stimulation reward (i.e., reward derived from direct electrochemical stimulation of the lateral hypothalamus), is also a component of the reward system.[19]

After nearly 50 years of research on brain-stimulation reward, researchers have established that dozens of sites in the brain will maintain intracranial self-stimulation. The lateral hypothalamus and medial forebrain bundle are especially effective: stimulation there activates fibers that form the ascending pathways, which include the mesolimbic dopamine pathway projecting from the ventral tegmental area to the nucleus accumbens. There are several explanations as to why the mesolimbic dopamine pathway is central to circuits mediating reward. First, there is a marked increase in dopamine release from the mesolimbic pathway when animals engage in intracranial self-stimulation.[8] Second, experiments consistently indicate that brain-stimulation reward reinforces the pathways that are normally activated by natural rewards, and that drug reward or intracranial self-stimulation can activate central reward mechanisms more powerfully because they act on the reward circuitry directly rather than through the peripheral nerves.[8][20][21] Third, when animals are administered addictive drugs or engage in naturally rewarding behaviors, such as feeding or sexual activity, there is a marked release of dopamine within the nucleus accumbens.[8] However, dopamine is not the only reward compound in the brain.

Pleasure centers

Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned).[1] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards.[1] Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward.[1] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking" reactions once they have been acquired.[1]

The reward system contains pleasure centers or hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions to intrinsic rewards. As of May 2015, hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, and parabrachial nucleus of the pons;[6][14] the insular cortex and orbitofrontal cortex likely contain hedonic hotspots as well.[6] Injections of opioids and endocannabinoids, but not dopamine, into the ventrorostral region of the nucleus accumbens increase liking, while injections into other regions may produce aversion or wanting, as dopamine microinjections do.[22]

The simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria.[23]


Incentive salience is the "wanting" or "desire" attribute, which includes a motivational component, that is assigned to a rewarding stimulus by the nucleus accumbens shell (NAcc shell).[1][24][25] The degree of dopamine neurotransmission into the NAcc shell from the mesolimbic pathway is highly correlated with the magnitude of incentive salience for rewarding stimuli.[24]

Activation of the dorsorostral region of the nucleus accumbens correlates with increases in wanting without concurrent increases in liking.[22] However, the dopaminergic neurotransmission into the nucleus accumbens shell is not only responsible for appetitive motivational salience (i.e., incentive salience) towards rewarding stimuli, but also for aversive motivational salience, which directs behavior away from undesirable stimuli.[26][27][28] D1-type medium spiny neurons within the NAcc shell confer incentive salience for rewarding stimuli, while D2-type medium spiny neurons within the NAcc shell confer aversive motivational salience for undesirable stimuli.[27][28]

Robinson and Berridge's incentive-sensitization theory (1993) proposed that reward contains separable psychological components: wanting (incentive) and liking (pleasure). To explain increasing contact with a certain stimulus, such as chocolate, two independent factors are at work – the desire to have the chocolate (wanting) and the pleasure effect of the chocolate (liking). According to Robinson and Berridge, wanting and liking normally operate in tandem, so rewards are usually wanted and liked to the same degree; under certain circumstances, however, wanting and liking change independently. For example, rats that stop eating after their dopamine systems are disrupted (a loss of desire for food) act as though they still like food. In another example, activating self-stimulation electrodes in the lateral hypothalamus of rats increases appetite but also causes more adverse reactions to tastes such as sugar and salt; apparently, the stimulation increases wanting but not liking. Such results demonstrate that the reward system includes independent processes of wanting and liking. The wanting component is thought to be controlled by dopaminergic pathways, whereas the liking component is thought to be controlled by opiate-benzodiazepine systems.[8]

Animals vs. humans

Animals quickly learn to press a bar to obtain an injection of opiates directly into the midbrain tegmentum or the nucleus accumbens. The same animals do not work to obtain the opiates if the dopaminergic neurons of the mesolimbic pathway are inactivated. From this perspective, animals, like humans, engage in behaviors that increase dopamine release.

Kent Berridge, a researcher in affective neuroscience, found that sweet (liked) and bitter (disliked) tastes produce distinct orofacial expressions, and that these expressions are displayed similarly by human newborns, orangutans, and rats. This was evidence that pleasure (specifically, liking) has objective features and is essentially the same across animal species. Most neuroscience studies had shown that the more dopamine a reward releases, the more effective the reward is; this is called the hedonic impact, which can be modulated by the effort required to obtain the reward and by the reward itself. Berridge discovered, however, that blocking dopamine systems did not seem to change the positive reaction to something sweet (as measured by facial expression); in other words, the hedonic impact of sugar did not change. This discounted the conventional assumption that dopamine mediates pleasure, and even with more intense dopamine alterations the data remained constant.[29]

Berridge developed the incentive salience hypothesis to address the wanting aspect of rewards. It explains the compulsive use of drugs by drug addicts even when the drug no longer produces euphoria, and the cravings experienced even after the individual has finished going through withdrawal. Some addicts respond to certain stimuli because of neural changes caused by drugs; this sensitization in the brain produces effects similar to dopamine's, in that both wanting and liking reactions occur. Human and animal brains and behaviors show similar changes in their reward systems because these systems are so prominent.[29]


Skinner box

The first clue to the presence of a reward system in the brain came with an accidental discovery by James Olds and Peter Milner in 1954. They discovered that rats would perform behaviors, such as pressing a bar, to administer a brief burst of electrical stimulation to specific sites in their brains. This phenomenon is called intracranial self-stimulation or brain-stimulation reward. Typically, rats will press a lever hundreds or thousands of times per hour to obtain this brain stimulation, stopping only when they are exhausted. While Olds and Milner were trying to teach rats to solve problems and run mazes, stimulation of certain regions of the brain seemed to give pleasure to the animals; the same procedure was tried with humans, and the results were similar. The explanation of why animals engage in a behavior that has no value to the survival of either themselves or their species is that the brain stimulation activates the system underlying reward.[30]

In a fundamental discovery made in 1954, researchers James Olds and Peter Milner found that low-voltage electrical stimulation of certain regions of the brain of the rat acted as a reward in teaching the animals to run mazes and solve problems.[31][32] It seemed that stimulation of those parts of the brain gave the animals pleasure,[31] and in later work humans reported pleasurable sensations from such stimulation. When rats were tested in Skinner boxes where they could stimulate the reward system by pressing a lever, the rats pressed for hours.[32] Research in the next two decades established that dopamine is one of the main chemicals aiding neural signaling in these regions, and dopamine was suggested to be the brain's "pleasure chemical".[33]

Ivan Pavlov was a physiologist who used the reward system to study classical conditioning: he rewarded dogs with food after they had heard a bell or another stimulus, so that the dogs associated the food (the reward) with the bell (the stimulus).[34] Edward L. Thorndike used the reward system to study operant conditioning. He began by putting cats in a puzzle box and placing food outside the box, so that the cats were motivated to escape. The cats worked to get out of the puzzle box to reach the food. Although the cats ate the food after they escaped the box, Thorndike observed that they also attempted to escape without the reward of food. In this way, Thorndike used the rewards of food and freedom to stimulate the cats' reward system, and used this to study how the cats learned to escape the box.[35]

Clinical significance


ΔFosB (delta FosB), a gene transcription factor, is the common factor among virtually all forms of addiction (behavioral addictions and drug addictions): when overexpressed in D1-type medium spiny neurons in the nucleus accumbens, it induces addiction-related behavior and neural plasticity. In particular, ΔFosB promotes self-administration, reward sensitization, and reward cross-sensitization effects among specific addictive drugs and behaviors.

Addictive drugs and behaviors are rewarding and reinforcing (i.e., are addictive) due to their effects on the dopamine reward pathway.[13][36]

The lateral hypothalamus and medial forebrain bundle have been the most frequently studied brain-stimulation reward sites, particularly in studies of the effects of drugs on brain-stimulation reward.[37] The neurotransmitter system that has been most clearly identified with the habit-forming actions of drugs of abuse is the mesolimbic dopamine system, with its efferent targets in the nucleus accumbens and its local GABAergic afferents. The reward-relevant actions of amphetamine and cocaine are in the dopaminergic synapses of the nucleus accumbens and perhaps the medial prefrontal cortex. Rats also learn to lever-press for cocaine injections into the medial prefrontal cortex, which works by increasing dopamine turnover in the nucleus accumbens.[38][39] Nicotine infused directly into the nucleus accumbens also enhances local dopamine release, presumably by a presynaptic action on the dopaminergic terminals of this region. Nicotinic receptors localize to dopaminergic cell bodies, and local nicotine injections increase dopaminergic cell firing that is critical for nicotinic reward.[40][41] Some additional habit-forming drugs are also likely to decrease the output of medium spiny neurons as a consequence, despite activating dopaminergic projections. For opiates, the lowest-threshold site for reward effects involves actions on GABAergic neurons in the ventral tegmental area, with a secondary site of opiate-rewarding actions on the medium spiny output neurons of the nucleus accumbens. Thus GABAergic afferents to the mesolimbic dopamine neurons (primary substrate of opiate reward), the mesolimbic dopamine neurons themselves (primary substrate of psychomotor stimulant reward), and GABAergic efferents to the mesolimbic dopamine neurons (a secondary site of opiate reward) form the core of currently characterized drug-reward circuitry.[37]


See also


  1. ^ a b c d e f g h i j k l m n o p q r s t u Schultz W (2015). "Neuronal reward and decision signals: from theories to data". Physiological Reviews. 95 (3): 853–951. doi:10.1152/physrev.00023.2014. PMC 4491543 . PMID 26109341. Rewards in operant conditioning are positive reinforcers. ... Operant behavior gives a good definition for rewards. Anything that makes an individual come back for more is a positive reinforcer and therefore a reward. Although it provides a good definition, positive reinforcement is only one of several reward functions. ... Rewards are attractive. They are motivating and make us exert an effort. ... Rewards induce approach behavior, also called appetitive or preparatory behavior, and consummatory behavior. ... Thus any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward. ... Rewarding stimuli, objects, events, situations, and activities consist of several major components. First, rewards have basic sensory components (visual, auditory, somatosensory, gustatory, and olfactory) ... Second, rewards are salient and thus elicit attention, which are manifested as orienting responses (FIGURE 1, middle). The salience of rewards derives from three principal factors, namely, their physical intensity and impact (physical salience), their novelty and surprise (novelty/surprise salience), and their general motivational impact shared with punishers (motivational salience). A separate form not included in this scheme, incentive salience, primarily addresses dopamine function in addiction and refers only to approach behavior (as opposed to learning) ... Third, rewards have a value component that determines the positively motivating effects of rewards and is not contained in, nor explained by, the sensory and attentional components (FIGURE 1, right). This component reflects behavioral preferences and thus is subjective and only partially determined by physical parameters. 
Only this component constitutes what we understand as a reward. It mediates the specific behavioral reinforcing, approach generating, and emotional effects of rewards that are crucial for the organism’s survival and reproduction, whereas all other components are only supportive of these functions. ... Rewards can also be intrinsic to behavior (31, 546, 547). They contrast with extrinsic rewards that provide motivation for behavior and constitute the essence of operant behavior in laboratory tests. Intrinsic rewards are activities that are pleasurable on their own and are undertaken for their own sake, without being the means for getting extrinsic rewards. ... Intrinsic rewards are genuine rewards in their own right, as they induce learning, approach, and pleasure, like perfectioning, playing, and enjoying the piano. Although they can serve to condition higher order rewards, they are not conditioned, higher order rewards, as attaining their reward properties does not require pairing with an unconditioned reward. ... These emotions are also called liking (for pleasure) and wanting (for desire) in addiction research (471) and strongly support the learning and approach generating functions of reward. 
  2. ^ Malenka RC, Nestler EJ, Hyman SE (2009). "Chapter 15: Reinforcement and Addictive Disorders". In Sydor A, Brown RY, eds. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 364–375. ISBN 978-0-07-148127-4. 
  3. ^ Nestler EJ (December 2013). "Cellular basis of memory for addiction". Dialogues Clin. Neurosci. 15 (4): 431–443. PMC 3898681 . PMID 24459410. Despite the importance of numerous psychosocial factors, at its core, drug addiction involves a biological process: the ability of repeated exposure to a drug of abuse to induce changes in a vulnerable brain that drive the compulsive seeking and taking of drugs, and loss of control over drug use, that define a state of addiction. ... A large body of literature has demonstrated that such ΔFosB induction in D1-type [nucleus accumbens] neurons increases an animal's sensitivity to drug as well as natural rewards and promotes drug self-administration, presumably through a process of positive reinforcement ... Another ΔFosB target is cFos: as ΔFosB accumulates with repeated drug exposure it represses c-Fos and contributes to the molecular switch whereby ΔFosB is selectively induced in the chronic drug-treated state.41. ... Moreover, there is increasing evidence that, despite a range of genetic risks for addiction across the population, exposure to sufficiently high doses of a drug for long periods of time can transform someone who has relatively lower genetic loading into an addict. 
  4. ^ "Glossary of Terms". Mount Sinai School of Medicine. Department of Neuroscience. Retrieved 9 February 2015. 
  5. ^ Volkow ND, Koob GF, McLellan AT (January 2016). "Neurobiologic Advances from the Brain Disease Model of Addiction". N. Engl. J. Med. 374 (4): 363–371. doi:10.1056/NEJMra1511480. PMID 26816013. Substance-use disorder: A diagnostic term in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) referring to recurrent use of alcohol or other drugs that causes clinically and functionally significant impairment, such as health problems, disability, and failure to meet major responsibilities at work, school, or home. Depending on the level of severity, this disorder is classified as mild, moderate, or severe.
    Addiction: A term used to indicate the most severe, chronic stage of substance-use disorder, in which there is a substantial loss of self-control, as indicated by compulsive drug taking despite the desire to stop taking the drug. In the DSM-5, the term addiction is synonymous with the classification of severe substance-use disorder.
  6. ^ a b c d e Berridge KC, Kringelbach ML (May 2015). "Pleasure systems in the brain". Neuron. 86 (3): 646–664. doi:10.1016/j.neuron.2015.02.018. PMC 4425246 . PMID 25950633. In the prefrontal cortex, recent evidence indicates that the [orbitofrontal cortex] OFC and insula cortex may each contain their own additional hot spots (D.C. Castro et al., Soc. Neurosci., abstract). In specific subregions of each area, either opioid-stimulating or orexin-stimulating microinjections appear to enhance the number of liking reactions elicited by sweetness, similar to the [nucleus accumbens] NAc and [ventral pallidum] VP hot spots. Successful confirmation of hedonic hot spots in the OFC or insula would be important and possibly relevant to the orbitofrontal mid-anterior site mentioned earlier that especially tracks the subjective pleasure of foods in humans (Georgiadis et al., 2012; Kringelbach, 2005; Kringelbach et al., 2003; Small et al., 2001; Veldhuizen et al., 2010). Finally, in the brainstem, a hindbrain site near the parabrachial nucleus of dorsal pons also appears able to contribute to hedonic gains of function (Söderpalm and Berridge, 2000). A brainstem mechanism for pleasure may seem more surprising than forebrain hot spots to anyone who views the brainstem as merely reflexive, but the pontine parabrachial nucleus contributes to taste, pain, and many visceral sensations from the body and has also been suggested to play an important role in motivation (Wu et al., 2012) and in human emotion (especially related to the somatic marker hypothesis) (Damasio, 2010). 
  7. ^ "Dopamine Involved In Aggression". Medical News Today. 15 January 2008. Retrieved 14 November 2010. 
  8. ^ a b c d e Kolb B, Whishaw IQ (2001). An Introduction to Brain and Behavior (1st ed.). New York: Worth. pp. 438–441. ISBN 9780716751694. 
  9. ^ a b Salamone JD, Correa M. "The Mysterious Motivational Functions of Mesolimbic Dopamine". Neuron. 76 (3): 470–485. doi:10.1016/j.neuron.2012.10.021. PMC 4450094 . PMID 23141060. 
  10. ^ a b c d e f g Yager LM, Garcia AF, Wunsch AM, Ferguson SM (August 2015). "The ins and outs of the striatum: Role in drug addiction". Neuroscience. 301: 529–541. doi:10.1016/j.neuroscience.2015.06.033. PMC 4523218 . PMID 26116518. [The striatum] receives dopaminergic inputs from the ventral tegmental area (VTA) and the substantia nigra (SNr) and glutamatergic inputs from several areas, including the cortex, hippocampus, amygdala, and thalamus (Swanson, 1982; Phillipson and Griffiths, 1985; Finch, 1996; Groenewegen et al., 1999; Britt et al., 2012). These glutamatergic inputs make contact on the heads of dendritic spines of the striatal GABAergic medium spiny projection neurons (MSNs) whereas dopaminergic inputs synapse onto the spine neck, allowing for an important and complex interaction between these two inputs in modulation of MSN activity ... It should also be noted that there is a small population of neurons in the [nucleus accumbens] NAc that coexpress both D1 and D2 receptors, though this is largely restricted to the NAc shell (Bertran- Gonzalez et al., 2008). ... Neurons in the NAc core and NAc shell subdivisions also differ functionally. The NAc core is involved in the processing of conditioned stimuli whereas the NAc shell is more important in the processing of unconditioned stimuli; Classically, these two striatal MSN populations are thought to have opposing effects on basal ganglia output. Activation of the dMSNs causes a net excitation of the thalamus resulting in a positive cortical feedback loop; thereby acting as a 'go’ signal to initiate behavior. Activation of the iMSNs, however, causes a net inhibition of thalamic activity resulting in a negative cortical feedback loop and therefore serves as a 'brake’ to inhibit behavior ... there is also mounting evidence that iMSNs play a role in motivation and addiction (Lobo and Nestler, 2011; Grueter et al., 2013). 
For example, optogenetic activation of NAc core and shell iMSNs suppressed the development of a cocaine CPP whereas selective ablation of NAc core and shell iMSNs ... enhanced the development and the persistence of an amphetamine CPP (Durieux et al., 2009; Lobo et al., 2010). These findings suggest that iMSNs can bidirectionally modulate drug reward. ... Together these data suggest that iMSNs normally act to restrain drug-taking behavior and recruitment of these neurons may in fact be protective against the development of compulsive drug use. 
  11. ^ Taylor SB, Lewis CR, Olive MF (2013). "The neurocircuitry of illicit psychostimulant addiction: acute and chronic effects in humans". Subst Abuse Rehabil. 4: 29–43. doi:10.2147/SAR.S39684. PMC 3931688 . PMID 24648786. Regions of the basal ganglia, which include the dorsal and ventral striatum, internal and external segments of the globus pallidus, subthalamic nucleus, and dopaminergic cell bodies in the substantia nigra, are highly implicated not only in fine motor control but also in [prefrontal cortex] PFC function.43 Of these regions, the [nucleus accumbens] NAc (described above) and the [dorsal striatum] DS (described below) are most frequently examined with respect to addiction. Thus, only a brief description of the modulatory role of the basal ganglia in addiction-relevant circuits will be mentioned here. The overall output of the basal ganglia is predominantly via the thalamus, which then projects back to the PFC to form cortico-striatal-thalamo-cortical (CSTC) loops. Three CSTC loops are proposed to modulate executive function, action selection, and behavioral inhibition. In the dorsolateral prefrontal circuit, the basal ganglia primarily modulate the identification and selection of goals, including rewards.44 The [orbitofrontal cortex] OFC circuit modulates decision-making and impulsivity, and the anterior cingulate circuit modulates the assessment of consequences.44 These circuits are modulated by dopaminergic inputs from the [ventral tegmental area] VTA to ultimately guide behaviors relevant to addiction, including the persistence and narrowing of the behavioral repertoire toward drug seeking, and continued drug use despite negative consequences.43–45 
  12. ^ Grall-Bronnec M, Sauvaget A (2014). "The use of repetitive transcranial magnetic stimulation for modulating craving and addictive behaviours: a critical literature review of efficacy, technical and methodological considerations". Neurosci. Biobehav. Rev. 47: 592–613. doi:10.1016/j.neubiorev.2014.10.013. PMID 25454360. Studies have shown that cravings are underpinned by activation of the reward and motivation circuits (McBride et al., 2006, Wang et al., 2007, Wing et al., 2012, Goldman et al., 2013, Jansen et al., 2013 and Volkow et al., 2013). According to these authors, the main neural structures involved are: the nucleus accumbens, dorsal striatum, orbitofrontal cortex, anterior cingulate cortex, dorsolateral prefrontal cortex (DLPFC), amygdala, hippocampus and insula. 
  13. ^ a b Malenka RC, Nestler EJ, Hyman SE (2009). Sydor A, Brown RY, eds. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 365–366, 376. ISBN 978-0-07-148127-4. The neural substrates that underlie the perception of reward and the phenomenon of positive reinforcement are a set of interconnected forebrain structures called brain reward pathways; these include the nucleus accumbens (NAc; the major component of the ventral striatum), the basal forebrain (components of which have been termed the extended amygdala, as discussed later in this chapter), hippocampus, hypothalamus, and frontal regions of cerebral cortex. These structures receive rich dopaminergic innervation from the ventral tegmental area (VTA) of the midbrain. Addictive drugs are rewarding and reinforcing because they act in brain reward pathways to enhance either dopamine release or the effects of dopamine in the NAc or related structures, or because they produce effects similar to dopamine. ... A macrostructure postulated to integrate many of the functions of this circuit is described by some investigators as the extended amygdala. The extended amygdala is said to comprise several basal forebrain structures that share similar morphology, immunocytochemical features, and connectivity and that are well suited to mediating aspects of reward function; these include the bed nucleus of the stria terminalis, the central medial amygdala, the shell of the NAc, and the sublenticular substantia innominata.
  14. ^ a b Richard JM, Castro DC, Difeliceantonio AG, Robinson MJ, Berridge KC (November 2013). "Mapping brain circuits of reward and motivation: in the footsteps of Ann Kelley". Neurosci. Biobehav. Rev. 37 (9 Pt A): 1919–1931. doi:10.1016/j.neubiorev.2012.12.008. PMC 3706488 . PMID 23261404.
    Figure 3: Neural circuits underlying motivated 'wanting' and hedonic 'liking'.
  15. ^ Luo M, Zhou J, Liu Z (August 2015). "Reward processing by the dorsal raphe nucleus: 5-HT and beyond". Learn. Mem. 22 (9): 452–460. doi:10.1101/lm.037317.114. PMC 4561406 . PMID 26286655. 
  16. ^ Moulton EA, Elman I, Becerra LR, Goldstein RZ, Borsook D (May 2014). "The cerebellum and addiction: insights gained from neuroimaging research". Addict. Biol. 19 (3): 317–331. doi:10.1111/adb.12101. PMC 4031616 . PMID 24851284. 
  17. ^ Caligiore D, Pezzulo G, Baldassarre G, Bostan AC, Strick PL, Doya K, Helmich RC, Dirkx M, Houk J, Jörntell H, Lago-Rodriguez A, Galea JM, Miall RC, Popa T, Kishore A, Verschure PF, Zucca R, Herreros I (February 2017). "Consensus Paper: Towards a Systems-Level View of Cerebellar Function: the Interplay Between Cerebellum, Basal Ganglia, and Cortex". Cerebellum. 16 (1): 203–229. doi:10.1007/s12311-016-0763-3. PMC 5243918 . PMID 26873754. 
  18. ^ Trantham-Davidson H, Neely LC, Lavin A, Seamans JK (2004). "Mechanisms underlying differential D1 versus D2 dopamine receptor regulation of inhibition in prefrontal cortex". The Journal of Neuroscience. 24 (47): 10652–10659. doi:10.1523/jneurosci.3179-04.2004. PMID 15564581. 
  19. ^ You ZB, Chen YQ, Wise RA (2001). "Dopamine and glutamate release in the nucleus accumbens and ventral tegmental area of rat following lateral hypothalamic self-stimulation". Neuroscience. 107 (4): 629–639. doi:10.1016/s0306-4522(01)00379-7. PMID 11720786. 
  20. ^ Wise RA, Rompre PP (1989). "Brain dopamine and reward". Annual Review of Psychology. 40: 191–225. PMID 2648975.
  21. ^ Wise RA (October 2002). "Brain reward circuitry: insights from unsensed incentives". Neuron. 36 (2): 229–240. doi:10.1016/S0896-6273(02)00965-0. PMID 12383779. 
  22. ^ a b Berridge KC, Kringelbach ML (1 June 2013). "Neuroscience of affect: brain mechanisms of pleasure and displeasure". Current Opinion in Neurobiology. 23 (3): 294–303. doi:10.1016/j.conb.2013.01.017. PMC 3644539 . PMID 23375169. For instance, mesolimbic dopamine, probably the most popular brain neurotransmitter candidate for pleasure two decades ago, turns out not to cause pleasure or liking at all. Rather dopamine more selectively mediates a motivational process of incentive salience, which is a mechanism for wanting rewards but not for liking them .... Rather opioid stimulation has the special capacity to enhance liking only if the stimulation occurs within an anatomical hotspot 
  23. ^ Kringelbach ML, Berridge KC (2012). "The Joyful Mind" (PDF). Scientific American: 44–45. Retrieved 17 January 2017. So it makes sense that the real pleasure centers in the brain – those directly responsible for generating pleasurable sensations – turn out to lie within some of the structures previously identified as part of the reward circuit. One of these so-called hedonic hotspots lies in a subregion of the nucleus accumbens called the medial shell. A second is found within the ventral pallidum, a deep-seated structure near the base of the forebrain that receives most of its signals from the nucleus accumbens. ...
         On the other hand, intense euphoria is harder to come by than everyday pleasures. The reason may be that strong enhancement of pleasure – like the chemically induced pleasure bump we produced in lab animals – seems to require activation of the entire network at once. Defection of any single component dampens the high.
         Whether the pleasure circuit – and in particular, the ventral pallidum – works the same way in humans is unclear.
  24. ^ a b Berridge KC (April 2012). "From prediction error to incentive salience: mesolimbic computation of reward motivation". Eur. J. Neurosci. 35 (7): 1124–1143. doi:10.1111/j.1460-9568.2012.07990.x. PMC 3325516 . PMID 22487042. Here I discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g. drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. ... Associative learning and prediction are important contributors to motivation for rewards. Learning gives incentive value to arbitrary cues such as a Pavlovian conditioned stimulus (CS) that is associated with a reward (unconditioned stimulus or UCS). Learned cues for reward are often potent triggers of desires. For example, learned cues can trigger normal appetites in everyone, and can sometimes trigger compulsive urges and relapse in addicts.
    Cue-triggered 'wanting' for the UCS
    A brief CS encounter (or brief UCS encounter) often primes a pulse of elevated motivation to obtain and consume more reward UCS. This is a signature feature of incentive salience.
    Cues as attractive motivational magnets
    When a Pavlovian CS+ is attributed with incentive salience it not only triggers 'wanting' for its UCS, but often the cue itself becomes highly attractive – even to an irrational degree. This cue attraction is another signature feature of incentive salience ... Two recognizable features of incentive salience are often visible that can be used in neuroscience experiments: (i) UCS-directed 'wanting' – CS-triggered pulses of intensified 'wanting' for the UCS reward; and (ii) CS-directed 'wanting' – motivated attraction to the Pavlovian cue, which makes the arbitrary CS stimulus into a motivational magnet.
  25. ^ Malenka RC, Nestler EJ, Hyman SE (2009). Sydor A, Brown RY, eds. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 147–148, 367, 376. ISBN 978-0-07-148127-4. VTA DA neurons play a critical role in motivation, reward-related behavior (Chapter 15), attention, and multiple forms of memory. This organization of the DA system, wide projection from a limited number of cell bodies, permits coordinated responses to potent new rewards. Thus, acting in diverse terminal fields, dopamine confers motivational salience ("wanting") on the reward itself or associated cues (nucleus accumbens shell region), updates the value placed on different goals in light of this new experience (orbital prefrontal cortex), helps consolidate multiple forms of memory (amygdala and hippocampus), and encodes new motor programs that will facilitate obtaining this reward in the future (nucleus accumbens core region and dorsal striatum). In this example, dopamine modulates the processing of sensorimotor information in diverse neural circuits to maximize the ability of the organism to obtain future rewards. ...
    The brain reward circuitry that is targeted by addictive drugs normally mediates the pleasure and strengthening of behaviors associated with natural reinforcers, such as food, water, and sexual contact. Dopamine neurons in the VTA are activated by food and water, and dopamine release in the NAc is stimulated by the presence of natural reinforcers, such as food, water, or a sexual partner. ...
    The NAc and VTA are central components of the circuitry underlying reward and memory of reward. As previously mentioned, the activity of dopaminergic neurons in the VTA appears to be linked to reward prediction. The NAc is involved in learning associated with reinforcement and the modulation of motoric responses to stimuli that satisfy internal homeostatic needs. The shell of the NAc appears to be particularly important to initial drug actions within reward circuitry; addictive drugs appear to have a greater effect on dopamine release in the shell than in the core of the NAc.
  26. ^ Salamone JD, Correa M (8 November 2012). "The mysterious motivational functions of mesolimbic dopamine". Neuron. 76 (3): 470–485. doi:10.1016/j.neuron.2012.10.021. ISSN 1097-4199. PMC 4450094 . PMID 23141060. 
  27. ^ a b Calipari ES, Bagot RC, Purushothaman I, Davidson TJ, Yorgason JT, Peña CJ, Walker DM, Pirpinias ST, Guise KG, Ramakrishnan C, Deisseroth K, Nestler EJ (March 2016). "In vivo imaging identifies temporal signature of D1 and D2 medium spiny neurons in cocaine reward". Proc. Natl. Acad. Sci. U.S.A. 113 (10): 2726–2731. doi:10.1073/pnas.1521238113. PMC 4791010 . PMID 26831103. Previous work has demonstrated that optogenetically stimulating D1 [medium spiny neurons] MSNs promotes reward, whereas stimulating D2 MSNs produces aversion. ... Studies using in vivo pharmacological approaches have demonstrated differential roles of NAc D1 and D2 receptors in drug conditioning by use of selective receptor agonists or antagonists, further supporting a role for both dopamine and D1 and D2 MSN subtypes in associative learning (27). Whereas this work has focused on the VTA-to-NAc dopamine circuit, tracking postsynaptic responses in NAc MSNs is particularly important because they integrate information not only from VTA dopamine neurons but also from numerous glutamatergic projections (28, 29). From a network perspective, D1 and D2 MSNs receive inputs from several regions known to encode and store information about context or context–drug associations such as the prefrontal cortex, basolateral amygdala, and hippocampus (30). ... Our data highlight the important role played by D1 MSNs in NAc core in establishing context–reward associations and in controlling the strength of these associations after cocaine exposure. ... Here we show that regulation of associative learning, and its dysregulation by cocaine, is driven primarily through alterations in D1 MSNs in NAc core, which both impair the extinction of previously learned associations and enhance reinstatement following abstinence. 
  28. ^ a b Baliki MN, Mansour A, Baria AT, Huang L, Berger SE, Fields HL, Apkarian AV (October 2013). "Parceling human accumbens into putative core and shell dissociates encoding of values for reward and pain". J. Neurosci. 33 (41): 16383–16393. doi:10.1523/JNEUROSCI.1731-13.2013. PMC 3792469 . PMID 24107968. Recent evidence indicates that inactivation of D2 receptors, in the indirect striatopallidal pathway in rodents, is necessary for both acquisition and expression of aversive behavior, and direct pathway D1 receptor activation controls reward-based learning (Hikida et al., 2010; Hikida et al., 2013). It seems we can conclude that direct and indirect pathways of the NAc, via D1 and D2 receptors, subserve distinct anticipation and valuation roles in the shell and core of NAc, which is consistent with observations regarding spatial segregation and diversity of responses of midbrain dopaminergic neurons for rewarding and aversive conditions, some encoding motivational value, others motivational salience, each connected with distinct brain networks and having distinct roles in motivational control (Bromberg-Martin et al., 2010; Cohen et al., 2012; Lammel et al., 2013). ... Thus, the previous results, coupled with the current observations, imply that the NAc pshell response reflects a prediction/anticipation or salience signal, and the NAc pcore response is a valuation response (reward predictive signal) that signals the negative reinforcement value of cessation of pain (i.e., anticipated analgesia). 
  29. ^ a b Berridge KC, Kringelbach ML (2008). "Affective neuroscience of pleasure: reward in humans and animals" (PDF). Psychopharmacology. 199 (3): 457–480. doi:10.1007/s00213-008-1099-6. ISSN 0033-3158. PMC 3004012 . PMID 18311558. Retrieved 20 October 2012. 
  30. ^ Wise RA (1996). "Addictive drugs and brain stimulation reward". Annu. Rev. Neurosci. 19: 319–340. PMID 8833446.
  31. ^ a b "human nervous system". 
  32. ^ a b Olds J, Milner P (1954). "Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain". Journal of Comparative and Physiological Psychology. 47 (6): 419–427.
  33. ^ Kringelbach ML, Berridge KC (2010). "The Functional Neuroanatomy of Pleasure and Happiness". Discovery Medicine. 9 (49): 579–587.
  34. ^ Ivan Petrovich Pavlov; G. V. Anrep (2003). Conditioned Reflexes. Courier Corporation. pp. 1–. ISBN 978-0-486-43093-5. 
  35. ^ Fridlund A, Kalat J (2014). Mind and Brain: The Science of Psychology. California: Cengage Learning.
  36. ^ Rang HP (2003). Pharmacology. Edinburgh: Churchill Livingstone. p. 596. ISBN 0-443-07145-4. 
  37. ^ a b Wise RA (1998). "Drug-activation of brain reward pathways". Drug and Alcohol Dependence. 51: 13–22.
  38. ^ Goeders NE, Smith JE (1983). "Cortical dopaminergic involvement in cocaine reinforcement". Science. 221: 773–775. doi:10.1126/science.6879176.
  39. ^ Goeders NE, Smith JE (1993). "Intracranial cocaine self-administration into the medial prefrontal cortex increases dopamine turnover in the nucleus accumbens". J. Pharmacol. Exp. Ther. 265: 592–600.
  40. ^ Clarke PB, Hommer DW, Pert A, Skirboll LR (1985). "Electrophysiological actions of nicotine on substantia nigra single units". Br. J. Pharmacol. 85: 827–835. doi:10.1111/j.1476-5381.1985.tb11081.x.
  41. ^ Westfall C, Grant H, Perry H (1988). "Release of dopamine and 5-hydroxytryptamine from rat striatal slices following activation of nicotinic cholinergic receptors". Gen. Pharmacol. 14: 321–325.

External links