Investigating Decision-Making and Reward in Schizophrenia

Our Research

Neuroimaging Studies of Reinforcement Processing in Schizophrenia

We have investigated physiological responses to reinforcement in schizophrenia patients using simple paradigms designed to contrast brain responses to positive vs. negative outcomes, for example, or brain responses to expected vs. unexpected outcomes (which evoke reward prediction errors). This work is being done in collaboration with the Neuroimaging Research Branch at the National Institute on Drug Abuse, directed by Elliot Stein.

In one experimental paradigm, adapted from a study by McClure et al. (2003), we showed subjects a light stimulus, followed by a small squirt of juice (a primary reinforcer) at a standard interval of 6 seconds, on roughly 3/4 of trials. On 1/4 of trials, however, the delivery of the juice reward was delayed by 4-7 seconds (and thus delivered 10-13 seconds after the light stimulus). This unexpected delay of the reward was designed to evoke a negative reward prediction error (a worse-than-expected event) at the expected delivery time, followed by a positive reward prediction error (a better-than-expected event) at the actual delivery time.
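The timing structure of this paradigm can be sketched in a few lines of code. The following is a minimal illustration only (the parameter names and the unit-magnitude prediction errors are our own conventions, not taken from the study): it generates trials with the 3/4 standard / 1/4 delayed schedule described above and marks where negative and positive prediction errors would be evoked on delayed trials.

```python
import random

def make_trial(rng, p_standard=0.75, standard_delay=6, extra_delay=(4, 7)):
    """Return (expected_time, actual_time) of juice delivery for one trial.
    On ~3/4 of trials the juice arrives 6 s after the cue; otherwise it is
    delayed by 4-7 s, arriving 10-13 s after the cue."""
    if rng.random() < p_standard:
        return standard_delay, standard_delay
    return standard_delay, standard_delay + rng.randint(*extra_delay)

def prediction_errors(expected, actual):
    """On delayed trials, a negative prediction error occurs at the expected
    delivery time (reward omitted when predicted) and a positive one at the
    actual delivery time (reward arrives unpredicted)."""
    if actual == expected:
        return []  # fully predicted outcome: no prediction error
    return [(expected, -1.0), (actual, +1.0)]

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(1000)]
delayed = [t for t in trials if t[1] != t[0]]
```

On delayed trials, `prediction_errors` returns the two signed events whose neural correlates the fMRI contrasts were designed to isolate.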

Based on evidence that the signaling of reward prediction errors depends on intact dopamine system function, and that abnormal dopamine system function is characteristic of schizophrenia, we expected that SZ patients would show blunted brain responses to prediction errors. We identified numerous brain areas that distinguished between positive and negative prediction errors, including the midbrain, the left and right putamen, and gustatory cortex, bilaterally.
Importantly, neural responses to positive prediction errors (unexpected deliveries of the juice reward) were blunted in schizophrenia patients in all of these areas. Furthermore, the magnitude of the response to the juice reward was negatively correlated with the severity of motivational deficits in patients (patients with the highest ratings for avolition showed the most blunted neural responses to the reward). By contrast, neural responses to negative prediction errors (unexpected omissions of the juice reward) appeared intact in all of these regions. We interpreted this result as indicating that the signaling of positive reward prediction errors, via phasic dopamine bursts in the basal ganglia (BG), is likely disrupted in SZ, whereas the signaling of negative reward prediction errors through phasic dopamine dips might be intact.

We have continued to investigate reward system function in schizophrenia using a variety of other paradigms, e.g., examining brain responses to the anticipation vs. the receipt of rewards, as well as testing responses to symbolic reinforcers, such as money. Through a collaboration with the Section of Human Neurogenetics at the National Institute on Alcohol Abuse and Alcoholism (David Goldman, Chief), we are also investigating genetic predictors of reward system responses in schizophrenia patients and controls, particularly in terms of genes that code for different aspects of dopamine system function.




Behavioral Studies of Reinforcement Learning in Schizophrenia

Our behavioral experiments, which use novel paradigms in conjunction with computational modeling methods, have several goals. In one set of experiments, we contrast gradual/procedural reinforcement learning, thought to depend on the BG, with rapid/explicit reinforcement learning, thought to depend on OFC. Through other means, we contrast positive-feedback-driven (Go) learning, thought to depend on transmission at D1-type dopamine receptors in the BG, with negative-feedback-driven (NoGo) learning, thought to depend on transmission at D2-type receptors in the BG. Gradual/procedural reinforcement learning can be assessed through performance improvements across sessions and, following successful acquisition, through preferences in novel pairings in a test phase. Rapid/explicit reinforcement learning, by contrast, can be assessed either through performance improvements in very early trials or by characterizing trial-to-trial adjustments in behavior (do subjects stay with a response following positive feedback, and switch to a different response following negative feedback?).


Based on modeling work by Michael Frank, we predicted that schizophrenia patients would show deficits in rapid reinforcement learning dependent on explicit representations of outcomes in OFC. Furthermore, we hypothesized that SZ patients would also have difficulty learning from phasic dopamine bursts in the BG (resulting from better-than-expected outcomes, called positive reward prediction errors), due to abnormally high tonic levels of dopamine activity. We proposed that procedural learning based on phasic dopamine dips in the BG (resulting from worse-than-expected outcomes, called negative reward prediction errors) might be intact, aiding overall performance on procedural learning tasks.

In a task requiring subjects to learn three probabilistic discriminations concurrently (with the best stimulus in each pair having a reward probability of 60-80%), we found that SZ patients did in fact show deficits in rapid reinforcement learning and impaired trial-to-trial adjustments (see Waltz et al., 2007). In the post-acquisition test phase, we found that patients also showed a reduced preference for the most-frequently-rewarded stimulus in novel pairings with less-frequently-rewarded stimuli, suggestive of a deficit in gradual “Go” learning (see Panel B below, middle set of bars). By contrast, patients showed normal avoidance of the most-frequently-punished stimulus in novel pairings with more-frequently-rewarded stimuli, suggestive of intact gradual “NoGo” learning.
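The structure of a task of this kind can be sketched as follows. The stimulus labels A-F, the exact 80/70/60% probabilities, and the function names are illustrative assumptions consistent with the 60-80% range described, not the actual task materials.

```python
import random
from itertools import combinations

# Three concurrent pairs; the better member (listed first) is rewarded
# with the stated probability, the worse member with its complement.
PAIRS = {("A", "B"): 0.8, ("C", "D"): 0.7, ("E", "F"): 0.6}

def feedback(choice, pair, rng):
    """Probabilistic reward: choosing the better stimulus pays off with the
    pair's probability; choosing the worse one, with its complement."""
    better, _ = pair
    p = PAIRS[pair] if choice == better else 1 - PAIRS[pair]
    return rng.random() < p

def test_phase_pairings():
    """Post-acquisition test pairings: 'choose-A' trials pit the
    most-rewarded stimulus (A) against members of other pairs, while
    'avoid-B' trials pit the most-punished stimulus (B) against them."""
    stimuli = [s for pair in PAIRS for s in pair]
    trained = {frozenset(p) for p in PAIRS}
    novel = [frozenset(c) for c in combinations(stimuli, 2)
             if frozenset(c) not in trained]
    choose_a = [p for p in novel if "A" in p]
    avoid_b = [p for p in novel if "B" in p and "A" not in p]
    return choose_a, avoid_b
```

Separating the novel pairings into choose-A and avoid-B trials is what lets the test phase dissociate gradual Go learning (preferring A) from gradual NoGo learning (avoiding B).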

In a subsequent experiment, we had subjects learn probabilistic discriminations (best stimulus rewarded 80% of the time) one at a time, and then reversed the reward contingencies after subjects reached a performance criterion. We found that SZ patients were less able to adjust to the rapid shift in reward contingencies, achieving fewer reversals (see figure below) and committing significantly more errors in attempting to achieve them. The dependence of rapid reversal learning on ventral regions of prefrontal cortex and the BG is well-established, and we interpreted this finding as further evidence of an OFC contribution to reinforcement learning deficits in SZ.
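A criterion-based serial reversal procedure like the one described can be sketched in code. The 80/20 schedule matches the description above; the consecutive-correct criterion of 8 and the win-stay/lose-shift stand-in "subject" are illustrative assumptions.

```python
import random

class WinStayLoseShift:
    """Stand-in subject: repeat a rewarded choice, switch after non-reward."""
    def __init__(self):
        self.next_choice = 0
    def choose(self):
        return self.next_choice
    def learn(self, choice, rewarded):
        self.next_choice = choice if rewarded else 1 - choice

def run_reversal_task(agent, n_trials=400, p_reward=0.8, criterion=8, seed=0):
    """Count reversals achieved: whenever the agent picks the currently
    'better' of two options on `criterion` consecutive trials, the reward
    contingencies flip and the count increments."""
    rng = random.Random(seed)
    better = streak = reversals = 0
    for _ in range(n_trials):
        choice = agent.choose()
        correct = choice == better
        rewarded = rng.random() < (p_reward if correct else 1 - p_reward)
        agent.learn(choice, rewarded)
        streak = streak + 1 if correct else 0
        if streak >= criterion:
            better, streak = 1 - better, 0
            reversals += 1
    return reversals
```

Reversals achieved and errors committed en route are exactly the dependent measures reported above; plugging in agents with different learning asymmetries shows how blunted flexibility reduces the reversal count.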

We are now conducting follow-up experiments using a variety of paradigms to further pinpoint which aspects of reinforcement processing are intact in SZ and which are abnormal.

Our research focuses on the origins of the negative symptoms of schizophrenia (SZ), especially anhedonia and motivational deficits. In particular, we are interested in how schizophrenia patients process and learn from positive and negative outcomes, and in how patients' reward-processing deficits might contribute to their motivational deficits. Although SZ patients generally show impairments on learning tasks dependent on hypothesis testing and explicit memory for outcomes, they often (but not always) show intact performance on tasks of procedural learning.


Our goal is to be able to characterize specific impairments in both explicit and procedural aspects of reinforcement processing, and their relationships to schizophrenic psychopathology.

We examine these issues using behavioral experiments, in conjunction with computational modeling and functional neuroimaging. These methods allow us to investigate reinforcement learning from the standpoint of its hypothesized neural substrates, including midbrain dopaminergic nuclei, the basal ganglia (BG), and orbitofrontal cortex (OFC). Modeling provides us with a formalized framework for testing hypotheses about neurophysiological mechanisms of reinforcement learning and decision-making. Functional neuroimaging (especially functional magnetic resonance imaging, or fMRI) allows us to examine neural responses to reinforcement directly, including in the absence of behavior. By using current methods and models from cognitive neuroscience, we attempt to create a picture of how disturbances in physiological responses to reinforcement could lead to motivational and functional deficits observed in schizophrenia.
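As a generic illustration of the modeling approach (a sketch of standard practice, not the lab's actual models or parameter grids), trial-by-trial choices can be fit with a Q-learning/softmax model by maximum likelihood; here a coarse grid search stands in for a proper optimizer.

```python
import math

def neg_log_likelihood(choices, rewards, alpha, beta):
    """Negative log-likelihood of binary choices (0/1) under Q-learning with
    learning rate `alpha` and softmax inverse temperature `beta`."""
    q = [0.5, 0.5]
    nll = 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))  # softmax choice rule
        p_choice = p1 if c == 1 else 1.0 - p1
        nll -= math.log(max(p_choice, 1e-12))
        q[c] += alpha * (r - q[c])  # prediction-error update of chosen value
    return nll

def fit_by_grid(choices, rewards,
                alphas=(0.05, 0.1, 0.2, 0.4, 0.8),
                betas=(0.5, 1, 2, 4, 8)):
    """Return the (alpha, beta) pair minimizing the NLL over a small grid."""
    return min(((a, b) for a in alphas for b in betas),
               key=lambda ab: neg_log_likelihood(choices, rewards, *ab))
```

Fitted parameters of this kind (e.g., a selectively reduced learning rate for gains) are what allow group differences in behavior to be expressed as differences in hypothesized neural mechanisms.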

Key publications:


Waltz, J.A., Frank, M.J., Robinson, B.M., Gold, J.M. (2007). Selective reinforcement learning deficits in schizophrenia support predictions from computational models of striato-cortical dysfunction. Biological Psychiatry, 62, 756-764.


Gold, J.M., Waltz, J.A., Prentice, K.J., Morris, S.E., Heerey, E.A. (2008). Reward processing in schizophrenia: A deficit in the representation of value. Schizophrenia Bulletin, 34, 835-847.


Waltz, J.A., Schweitzer, J.B., Ross, T.J., Kurup, P.K., Salmeron, B.J., Rose, E.J., Gold, J.M., Stein, E.A. (2010). Abnormal responses to monetary outcomes in cortex, but not in the basal ganglia, in schizophrenia. Neuropsychopharmacology, 35, 2427-2439.


Gold, J.M., Waltz, J.A., Matveeva, T.M., Kasanova, Z., Strauss, G.P., Herbener, E.S., Collins, A.G.E., Frank, M.J. (2012). Negative symptoms in schizophrenia result from a failure to represent the expected value of rewards: Behavioral and computational modeling evidence. Archives of General Psychiatry, 69, 129-138.


Strauss, G.P., Waltz, J.A., Gold, J.M. (2014). A review of reward processing and motivational impairment in schizophrenia. Schizophrenia Bulletin, 40, S107-16. Epub 2013 Dec 27. PMID: 24375459. PMCID: PMC3934394.

The project described was supported by Grant Number R01MH094460 from the National Institute Of Mental Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute Of Mental Health or the National Institutes of Health.