Abstract
Cognitive control mechanisms support the deliberate regulation of thought and behavior based on current goals. Recent work suggests that motivational incentives improve cognitive control, and has begun to elucidate the brain regions that may support this effect. Here, we conducted a quantitative meta-analysis of neuroimaging studies of motivated cognitive control using activation likelihood estimation (ALE) and Neurosynth in order to delineate the brain regions that are consistently activated across studies. The analysis included functional neuroimaging studies that investigated changes in brain activation during cognitive control tasks when reward incentives were present versus absent. The ALE analysis revealed consistent recruitment in regions associated with the frontoparietal control network including the inferior frontal sulcus (IFS) and intraparietal sulcus (IPS), as well as consistent recruitment in regions associated with the salience network including the anterior insula and anterior mid-cingulate cortex (aMCC). A large-scale exploratory meta-analysis using Neurosynth replicated the ALE results, and also identified the caudate nucleus, nucleus accumbens, medial thalamus, inferior frontal junction/premotor cortex (IFJ/PMC), and hippocampus. Finally, we conducted separate ALE analyses to compare recruitment during cue and target periods, which tap into proactive engagement of rule-outcome associations, and the mobilization of appropriate viscero-motor states to execute a response, respectively. We found that largely distinct sets of brain regions are recruited during cue and target periods. Altogether, these findings suggest that flexible interactions between frontoparietal, salience, and dopaminergic midbrain-striatal networks may allow control demands to be precisely tailored based on expected value.
Introduction
The ability to maintain attention during a lecture, or flexibly shift between writing a report and answering emails, or plan several steps ahead during a chess match all require cognitive control—the capacity to deliberately guide thought and behavior based on goals, especially in the presence of distraction or competing responses (Botvinick et al., 2001; Desimone & Duncan, 1995; Duncan, 2013; Gollwitzer, 1999; Miller & Cohen, 2001; Miyake et al., 2000; Posner & Dehaene, 1994; Posner & DiGirolamo, 1998; Stuss & Knight, 2002). Cognitive control involves several related, yet dissociable abilities (Miyake et al., 2000), including working memory (D'Esposito & Postle, 2015; Funahashi, Chafee, & Goldman-Rakic, 1993; Fuster & Alexander, 1971; Goldman-Rakic, 1987), representation of rules and context (Asaad, Rainer, & Miller, 2000; Bunge, 2004; Cohen & Servan-Schreiber, 1992; Dixon & Christoff, 2012; Koechlin, Ody, & Kouneiher, 2003; Miller & Cohen, 2001; Munakata et al., 2011), conflict and error detection (Botvinick et al., 2001; Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004; Ullsperger, Danielmeier, & Jocham, 2014), inhibition of pre-potent responses (Aron, Robbins, & Poldrack, 2004), abstract thought and reasoning (Christoff et al., 2009; Christoff et al., 2001; Dias, Robbins, & Roberts, 1996), and set-shifting (Crone, Wendelken, Donohue, & Bunge, 2006; Meiran, 1996; Meiran, 2000; Rushworth, Passingham, & Nobre, 2002).
While early work identified the prefrontal cortex (PFC) as a critical neural substrate (Desimone & Duncan, 1995; Duncan, 2001; Fuster, 1989; Miller & Cohen, 2001; Passingham & Wise, 2012; Stuss & Knight, 2002; Watanabe, 2017), it soon became clear that a much broader network of regions supports cognitive control, including posterior parietal, lateral temporal, insular, and mid-cingulate cortices, as well as parts of the basal ganglia. Together, these regions are often referred to as the frontoparietal control network (FPCN) or Multiple Demand system (Cole, Repovs, & Anticevic, 2014; Cole et al., 2013; Cole & Schneider, 2007; Crittenden, Mitchell, & Duncan, 2016; Dixon, Andrews-Hanna, et al., 2017; Dixon, Girn, & Christoff, 2017; Dosenbach et al., 2007; Duncan, 2010; Mitchell et al., 2016; Spreng et al., 2010; Vincent et al., 2008). The FPCN flexibly represents a variety of task-relevant information and exerts a top-down influence on other regions, guiding activation in accordance with current task demands (Buschman & Miller, 2007; Crowe et al., 2013; Desimone & Duncan, 1995; Dixon, Fox, & Christoff, 2014b; Egner & Hirsch, 2005; Miller & Cohen, 2001; Tomita et al., 1999).
The effects of motivation on cognitive control
As research progressed in delineating the components of cognitive control, a separate stream of inquiry focused on the neural mechanisms of assigning value to stimuli and value-guided decision making (Daw, Niv, & Dayan, 2005; Dixon & Christoff, 2014; Dixon, Thiruchselvam, Todd, & Christoff, 2017; Levy & Glimcher, 2012; O'Doherty, 2004; Rangel, Camerer, & Montague, 2008; Rangel & Hare, 2010; Rushworth et al., 2011; Schoenbaum & Esber, 2010). The past decade has seen a synthesis of these fields with a surge of interest in understanding how value influences the decision of whether or not to engage cognitive control and the efficacy of implementing control (Botvinick & Braver, 2015; Braver et al., 2014; Cohen, Braver, & Brown, 2002; Cools, 2016; Dixon, 2015; Dixon & Christoff, 2012; Hazy, Frank, & O'Reilly, 2007; McGuire & Botvinick, 2010; O'Reilly, Herd, & Pauli, 2010). This line of inquiry is yielding new insights into mechanisms that allow the desire to achieve a specific outcome to interact with the cognitive processes that are necessary to realize that outcome, and may ultimately provide critical information about pathological conditions that involve altered motivation-cognition interactions including depression, schizophrenia, ADHD, and anxiety (Barkley, 1997; Bishop, Duncan, Brett, & Lawrence, 2004; Chung & Barch, 2015; Davidson, 2000; Heller et al., 2009; Kaiser, Andrews-Hanna, Spielberg, et al., 2015; Kaiser, Andrews-Hanna, Wager, & Pizzagalli, 2015; Nigg & Casey, 2005; Pessoa, 2008; Shackman et al., 2011; Shackman et al., 2016).
Recent studies have shown that individuals are strongly biased towards choosing habits and simple tasks over more complex or demanding tasks that require cognitive control (Botvinick & Braver, 2015; Dixon & Christoff, 2012; Kool, McGuire, Rosen, & Botvinick, 2010; McGuire & Botvinick, 2010). This has led to the notion that cognitive control carries an intrinsic effort cost, a cost that can be offset by the opportunity to acquire a rewarding outcome. Studies have shown that participants are considerably more likely to engage cognitive control if doing so will result in a larger reward than choosing a habitual action would (Dixon & Christoff, 2012; Westbrook, Kester, & Braver, 2013). Thus, cognitive control engagement can be understood as a special case of cost/benefit decision making, whereby the expected value of the outcome that will result from engaging cognitive control is weighed against the effort cost of its implementation (Botvinick & Braver, 2015; Dixon & Christoff, 2012; Shenhav, Botvinick, & Cohen, 2013).
Following the decision to engage cognitive control, the opportunity to earn a reward can also influence the efficacy of implementing control processes. In one study, participants performed a modified Stroop task during which they decided whether an image was a building or a house, and had to ignore letters overlaid on the images (Padmala & Pessoa, 2011). The letters could be neutral (XXXXX), congruent with the image (e.g., HOUSE printed over a house image), or incongruent (e.g., BLDNG printed over a house image). Pre-trial cues indicated whether monetary rewards were available or not available, and participants could only earn rewards if performance was fast and accurate. The results demonstrated enhanced implementation of cognitive control, manifest as reduced interference effects on incongruent trials when rewards were available (Padmala & Pessoa, 2011). This incentive effect may reflect a sharpening of the representation of task-relevant information (Etzel et al., 2015; Histed, Pasupathy, & Miller, 2009), thus providing more effective modulation of sensorimotor processes that support performance. Incentive-based facilitation of behavioral performance has been reported across numerous studies using a range of cognitive control paradigms (Chiew & Braver, 2013, 2014; Chiew, Stanek, & Adcock, 2016; Dixon & Christoff, 2012; Etzel et al., 2015; Ivanov et al., 2012; Jimura, Locke, & Braver, 2010; Krebs et al., 2012; Locke & Braver, 2008; Padmala & Pessoa, 2011; Taylor et al., 2004).
The neural basis of motivational effects on cognitive control
Functional neuroimaging studies have identified brain regions associated with the influence of motivation on the implementation of cognitive control (Bahlmann, Aarts, & D'Esposito, 2015; Beck et al., 2010; Engelmann, Damaraju, Padmala, & Pessoa, 2009; Gilbert & Fiez, 2004; Ivanov et al., 2012; Kouneiher, Charron, & Koechlin, 2009; Locke & Braver, 2008; Padmala & Pessoa, 2011; Pochon et al., 2002; Rowe, Eckstein, Braver, & Owen, 2008; Taylor et al., 2004). In one study, Jimura and colleagues (2010) employed a Sternberg task with two types of task blocks. One block consisted of only non-reward trials, while the other block consisted of trials with varying outcomes: no reward, low reward ($0.25), or high reward ($0.75). On each trial participants were presented with a 5-word memory set and then had to indicate whether a subsequent probe word matched one of the items in the memory set. The results demonstrated a shift from transient to sustained activation in lateral prefrontal and parietal cortices during reward versus no reward blocks, and individual differences in reward sensitivity correlated with the magnitude of sustained activation in reward contexts (Jimura et al., 2010).
These results can be interpreted in terms of the dual mechanisms of control (DMC) framework, which suggests that reward incentives shift the type and timing of cognitive control (Braver, 2012; Chiew & Braver, 2013; Jimura et al., 2010). This theory posits two temporally-defined cognitive control mechanisms: (i) a proactive mechanism consisting of sustained activation of task-relevant information (e.g., task rules) across trials, which facilitates the encoding of new information on each trial and the preparation of a target response; and (ii) a reactive mechanism consisting of the stimulus-triggered transient re-activation of rule information on a trial-by-trial basis. Frontoparietal activation dynamics support the idea that reward incentives lead to greater reliance on proactive control, consistent with the DMC model.
Numerous studies have now observed elevated frontoparietal activation when cognitive control is performed in the service of obtaining rewarding outcomes (Boehler et al., 2014; Engelmann et al., 2009; Gilbert & Fiez, 2004; Ivanov et al., 2012; Kouneiher et al., 2009; Locke & Braver, 2008; Padmala & Pessoa, 2011; Paschke et al., 2015; Pochon et al., 2002; Rowe et al., 2008; Soutschek et al., 2015; Taylor et al., 2004). Additionally, frontoparietal regions encode associations between specific rules and expected reward outcomes (Dixon & Christoff, 2012), exhibit more differentiated coding of task rules on incentivized trials (Etzel et al., 2015), and are sensitive to the interaction between control level and reward availability (Bahlmann et al., 2015; Ivanov et al., 2012; Padmala & Pessoa, 2011; Soutschek et al., 2015). These regions are also recruited during value-based decision making, and when participants plan and monitor progress towards future desired outcomes (Crockett et al., 2013; Dixon, Fox, & Christoff, 2014a; Gerlach, Spreng, Madore, & Schacter, 2014; Jimura, Chushak, & Braver, 2013; McClure, Laibson, Loewenstein, & Cohen, 2004). Finally, single cell recordings in non-human primates have revealed reward-contingent enhancement of lateral PFC neural firing related to working memory and task rules (Histed et al., 2009; Leon & Shadlen, 1999; Watanabe, 1996; Watanabe & Sakagami, 2007). Thus, frontoparietal regions may integrate task-relevant information and expected motivational outcomes (Dixon & Christoff, 2014; Pessoa, 2008; Watanabe, 2017; Watanabe & Sakagami, 2007).
The current meta-analysis
While numerous studies of motivated cognitive control have reported activation in frontoparietal regions, the consistency of activations across these studies has yet to be systematically examined. The present study sought to characterize the network of brain regions that are consistently recruited during motivated cognitive control. To this end we used a quantitative approach, activation likelihood estimation (ALE), to identify regions that show consistent recruitment in human neuroimaging studies of cognitive control that included a manipulation of reward incentive availability. We additionally used Neurosynth to identify regions that are consistently recruited in studies that use the term “cognitive control” and in studies that use the term “reward”. While the ALE analysis provides a conservative and rigorous analysis based on a set of carefully selected studies, the Neurosynth analysis provides a complementary perspective based on a liberal exploration of a much wider literature. Finally, we performed two additional exploratory ALE analyses to examine activations during cue and target periods. During cue periods, participants are presented with information about task rules for responding to stimuli and expected payoffs. This period thus allows for the preparatory construction of rule-outcome associations in the service of proactive control engagement. During target periods, participants respond to stimuli and must mobilize appropriate viscero-motor states to facilitate faster and more accurate behaviors when a reward is on the line. This analysis allowed us to examine the extent to which cue and target periods rely on similar versus distinct brain systems.
Materials and Methods
Search strategy
We conducted a literature search through PubMed and Google Scholar to identify peer-reviewed neuroimaging studies that have investigated motivated cognitive control. We began by searching the key terms “fMRI” AND (“reward” OR “motivation”) AND (“cognitive control” OR “executive function” OR “working memory”). We then read the abstract of each paper to confirm or reject it as a candidate study for inclusion in the meta-analysis. We only focused on activations, because there are very few deactivations reported in the literature. Additionally, we focused on the effect of reward, because only a few studies have looked at the effect of punishment. To be included in the analysis, studies had to fulfill the following criteria: (i) employ fMRI and report resulting activation coordinates; (ii) include a cognitive control task (e.g., Stroop) with a manipulation of motivational incentive (i.e., reward versus no reward, or high versus low reward conditions); (iii) include healthy adult human participants; and (iv) report results from a whole-brain analysis. Several studies of motivated cognitive control employed ROI-based analyses and were not included in the meta-analysis, given that ALE requires whole-brain analyses to provide unbiased results. Sixteen studies were found that matched the inclusion criteria (Table 1). The presence of reward was associated with significantly improved behavioral performance (decreased reaction time and/or increased accuracy) in all but one of the sixteen studies.
Data extraction
From these sixteen studies, we collected data on sample size, task, type of contrast (e.g., main effect of reward during task, or reward x cognitive load interaction), task period (e.g., cue, delay, or target), and peak activation coordinates (Table 1). The meta-analysis included studies with different types of contrasts, but each examined the neural substrates that link motivational incentives to cognitive control. There were three categories of contrasts: (i) main effect of reward during a cognitive control task; (ii) conjunction effects showing overlapping activation in relation to cognitive demands and sensitivity to reward value; and (iii) interaction between cognitive control level and presence of incentive. While there are some differences in these three types of contrasts, all converge on related processes that support incentive-based modulation of cognitive control. It should be noted that we included results from the main effect of reward during task performance (e.g., during delay or target periods) but excluded results related to a main effect of reward during cue periods that only revealed the expected reward incentive, as this is likely to mainly capture reward processing alone, without an interaction with cognitive processes. If the cue period signaled motivational information and cognitive information (e.g., rules) that could be activated in a preparatory manner, then we included these foci. For studies that had multiple periods (e.g., delay, probe), we included foci from each period; however, if a given brain region was activated in multiple periods, it was only included once in the meta-analysis. Note that for the separate cue period and target period analyses, all available foci were used.
ALE meta-analytic data analysis
We analyzed the activation coordinates using a random-effects meta-analysis, activation likelihood estimation (ALE) (Eickhoff et al., 2012; Eickhoff et al., 2009; Laird et al., 2005; Turkeltaub et al., 2012) implemented with GingerALE 2.3.6 software (San Antonio, TX: UT Health Science Center Research Imaging Institute). This is the updated version of GingerALE that has fixed the error related to cluster-level FWE correction (Eickhoff et al., 2017).
Coordinates reported in Talairach space were first converted to MNI space using GingerALE’s foci converter function: Talairach to MNI (SPM). ALE treats each activation focus as a 3D Gaussian probability density distribution, thereby modeling the spatial uncertainty in the localization of foci across studies. The width of each distribution is determined empirically from estimates of between-subject and between-template variability (Eickhoff et al., 2009). Studies are additionally weighted by sample size, reflecting the idea that larger samples are more likely to reflect the true localization: the Gaussian widens for smaller samples and narrows (thus exerting a stronger influence on ALE scores) for larger samples (Eickhoff et al., 2009). A modeled activation map for each study was generated by combining the probabilities of all activation foci at each voxel (Turkeltaub et al., 2012). The voxel-wise union of these maps across studies yields the ALE value, an estimate of the likelihood of activation given the input data; the algorithm thus identifies clusters where activation converges significantly across studies. These ALE scores were then compared to an ALE null distribution (Eickhoff et al., 2012) in which the same number of activation foci was randomly relocated, restricted to a gray matter probability map (Evans, Kamber, Collins, & MacDonald, 1994). Spatial associations between experiments were treated as random, while the distribution of foci within an experiment was treated as fixed; random-effects inference therefore focuses on significant convergence of foci between studies rather than convergence within one study.
The ALE scores from the actual meta-analysis were then tested against the ALE scores obtained under this null distribution, yielding a p-value based on the proportion of equal or higher random values. For the main ALE analysis, we used a cluster-forming threshold at the voxel level of p < 0.001, and a cluster-level threshold of p < 0.05, FWE corrected for multiple comparisons. We also ran separate analyses on foci from the cue period and foci from the target period. Given that fewer studies were included in each of these analyses, we used a more liberal p < .001, uncorrected threshold, with a minimum cluster size of 200 mm³. Results were visualized with MRIcron software (Rorden, Karnath, & Bonilha, 2007).
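The logic of the ALE procedure described above can be sketched in a few lines of code. This is a deliberately minimal toy illustration, not the GingerALE implementation: it uses a single fixed kernel width rather than the sample-size-dependent kernels, a small list of voxel coordinates rather than a full brain volume, and voxel-wise rather than cluster-level inference.

```python
import numpy as np

rng = np.random.default_rng(0)

FWHM_MM = 10.0  # fixed illustrative kernel; GingerALE derives width from sample size
SIGMA = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def modeled_activation(foci, grid):
    """Per-study modeled activation (MA) map: at each voxel, the probability
    that at least one of the study's foci is truly located there (union of
    Gaussians centered on the reported coordinates)."""
    p_none = np.ones(len(grid))
    for focus in foci:
        d2 = np.sum((grid - focus) ** 2, axis=1)
        p_none *= 1.0 - np.exp(-d2 / (2.0 * SIGMA ** 2))
    return 1.0 - p_none

def ale_map(studies, grid):
    """ALE score: voxel-wise union of the per-study MA maps."""
    p_none = np.ones(len(grid))
    for foci in studies:
        p_none *= 1.0 - modeled_activation(foci, grid)
    return 1.0 - p_none

def permutation_p(studies, grid, gray_idx, n_perm=500):
    """Voxel-wise p-values from a null distribution built by relocating each
    study's foci to random gray-matter voxels, preserving the number of foci
    per study; p is the proportion of equal-or-higher null scores."""
    observed = ale_map(studies, grid)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        shuffled = [grid[rng.choice(gray_idx, size=len(f))] for f in studies]
        exceed += ale_map(shuffled, grid) >= observed
    return (exceed + 1) / (n_perm + 1)
```

A voxel reported by several studies accumulates overlapping Gaussian mass and receives a high ALE score, whereas isolated foci are rarely matched by the randomly relocated null configurations.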
Neurosynth meta-analyses
To examine consistent recruitment related to motivated cognitive control using a more liberal and exploratory approach, we performed term-based forward inference meta-analyses using Neurosynth (Yarkoni et al., 2011). To perform such automated meta-analyses, Neurosynth divides the entire database of coordinates into two sets: those that occur in articles containing a particular term, and those that do not. A large-scale meta-analysis is then performed comparing the coordinates reported for studies with and without the term of interest. Forward inference maps reflect z-scores corresponding to the likelihood that each voxel will activate if a study uses a particular term (P(Activation|Term)), and are corrected for multiple comparisons using a false discovery rate (FDR) of q = .01. Here, we conducted forward inference meta-analyses using the terms “cognitive control” and “reward”, and looked for brain areas demonstrating overlapping recruitment across both domains.
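The comparison underlying a forward-inference map can be sketched as follows. This is a hedged illustration, not Neurosynth's code: Neurosynth's implementation uses chi-square tests on the full study-level database, whereas the sketch below uses the closely related two-proportion z statistic (for a 2x2 table, z squared equals the chi-square statistic), together with a standard Benjamini-Hochberg FDR cutoff. All function names are our own.

```python
import math
import numpy as np

def forward_inference_z(k_term, n_term, k_other, n_other):
    """Two-proportion z statistic for one voxel: rate of activation among
    studies that mention the term (k_term of n_term) versus studies that
    do not (k_other of n_other)."""
    p1 = k_term / n_term
    p0 = k_other / n_other
    pooled = (k_term + k_other) / (n_term + n_other)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n_term + 1.0 / n_other))
    return (p1 - p0) / se

def one_sided_p(z):
    """Upper-tail p-value for a standard normal z score."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def fdr_threshold(pvals, q=0.01):
    """Benjamini-Hochberg cutoff: the largest p-value that falls at or
    below its rank-scaled criterion q * rank / m. Voxels with p-values
    at or below the returned threshold survive correction."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    ok = p <= q * np.arange(1, m + 1) / m
    return float(p[ok].max()) if ok.any() else 0.0
```

Applying this per voxel and thresholding at the FDR cutoff yields a map like the q = .01 forward-inference maps used here; the conjunction analysis then simply intersects the surviving voxels of the “cognitive control” and “reward” maps.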
Results
ALE meta-analysis: all foci
We first performed an analysis on all foci to identify regions that consistently demonstrate increased activation during cognitive control when rewards are available versus not available. The ALE analysis revealed four large clusters (Figures 1 and 2; Table 2). These right-lateralized regions included the inferior frontal sulcus (IFS) extending into the mid-dorsolateral prefrontal cortex (mid-DLPFC), mid-intraparietal sulcus (mid-IPS) extending into the anterior inferior parietal lobule (aIPL), anterior insula, and the anterior mid-cingulate cortex (aMCC) extending into the adjacent pre-supplementary motor area (pre-SMA).
Neurosynth meta-analyses
Although the strict inclusion criteria in the ALE analysis offer confidence that the identified regions play a key role in motivated cognitive control, it is possible that this conservative analysis may overlook other relevant regions. Thus, as a complementary analysis, we used Neurosynth to identify regions that are consistently activated in studies that use the term “cognitive control” (N = 428 studies) and in studies that use the term “reward” (N = 671 studies). We focused on brain areas demonstrating overlapping recruitment across these domains, as such areas may play a role in linking control demands to motivational outcomes. Notably, the regions demonstrating this pattern included all of the regions identified in the ALE meta-analysis (Figure 3). This analysis additionally identified homologous regions in the left hemisphere, as well as the bilateral inferior frontal junction/pre-motor cortex (IFJ/PMC), bilateral caudate nucleus extending into the nucleus accumbens (NAcc), bilateral medial thalamus, and bilateral hippocampus (Figure 3).
ALE meta-analyses: cue and target period foci
In our final analysis, we examined similarities and differences in neural recruitment during cue and target periods. Given that these analyses were based on a limited number of studies and a more lenient statistical threshold, these results should be viewed as exploratory. The brain regions that were consistently recruited during the cue period, and may contribute to the proactive engagement of value-modulated control signals, included the right IFJ/PMC, left ventral IPS, bilateral caudate, right dorsal posterior cingulate cortex (PCC), right midbrain near the ventral tegmental area (VTA), and left medial thalamus (Figure 4). On the other hand, the brain regions that were consistently recruited during the target period, and may contribute to the mobilization of viscero-motor processes during action selection, included the right anterior insula, right aMCC, right IPS/aIPL, right medial thalamus, left ventral IPS, and left IFJ (Figure 4). The only region common to both trial events was the left ventral IPS, suggesting that value-based modulation of control processes during cue and target periods may rely on largely distinct neural systems. However, this is a tentative conclusion, tempered by the low power of these analyses.
Discussion
Cognitive control is often enhanced when reward incentives are contingent on performance. This enhancement manifests as faster and more accurate responses, and is often accompanied by elevated brain activation in numerous cortical regions. Here, we sought to characterize the brain regions that reliably demonstrate this pattern and may support incentive-related behavioral improvements in cognitive control. Using quantitative ALE and Neurosynth meta-analyses, we identified a select constellation of multimodal association cortices and subcortical regions known to play key roles in motivational processing. An exploratory analysis also revealed differences in recruitment during cue versus target periods, suggesting partially distinct systems may underlie the proactive engagement of control versus the mobilization of viscero-motor states that support action execution.
Several regions were implicated in both the ALE and Neurosynth analyses including the inferior frontal sulcus (IFS), intraparietal sulcus (IPS)/anterior inferior parietal lobule (aIPL), anterior mid-cingulate cortex (aMCC)/pre-supplementary motor area (pre-SMA), and anterior insula. The fact that similar results were obtained with different analysis criteria provides strong evidence that these regions are centrally involved in value-based modulation of cognitive control. Interestingly, the Neurosynth analysis revealed that only a subset of regions engaged during cognitive control are also engaged during reward processing. For example, the posterior middle temporal gyrus and parts of the lateral prefrontal and parietal cortices only demonstrated consistent recruitment during cognitive control. This suggests that there may be a select group of regions including the IFS, IPS/aIPL, aMCC/pre-SMA, and insula that integrate control demands and expected outcomes.
The aforementioned regions have well-established roles in supporting cognitive control and adaptive behavior via top-down modulation of sensory and motor processing (Cole et al., 2014; Dosenbach et al., 2006; Duncan, 2010; Miller & Cohen, 2001). The IFS and IPS/aIPL are part of the frontoparietal control network (FPCN) (Dixon, Girn, et al., 2017; Power et al., 2011; Spreng et al., 2010; Vincent et al., 2008; Yeo et al., 2011) and contribute to working memory and the flexible representation of task rules (Badre & D'Esposito, 2009; Brass, Derrfuss, Forstmann, & von Cramon, 2005; Bunge, 2004; De Baene, Kuhn, & Brass, 2011; Derrfuss, Brass, Neumann, & von Cramon, 2005; Dixon & Christoff, 2012; Dumontheil, Thompson, & Duncan, 2011; Koechlin et al., 2003; Wallis, Anderson, & Miller, 2001). Neurons in these regions exhibit dynamic coding properties, signaling any currently relevant information (Duncan, 2010; Stokes et al., 2013), and rapidly updating their pattern of global functional connectivity according to task demands (Cole et al., 2013; Fornito, Harrison, Zalesky, & Simons, 2012; Gao & Lin, 2012; Spreng et al., 2010). One possibility is that elevated activation during motivated cognitive control reflects an amplification and sharpening of task information (e.g., rules) as a result of modulatory inputs from reward processing regions (Cohen et al., 2002; Etzel et al., 2015; Histed et al., 2009; Kouneiher et al., 2009). It could also reflect a shift in the temporal dynamics of cognitive control, towards a proactive mode of control (Braver, 2012; Jimura et al., 2010). When performance needs to be fast and accurate in order to procure a reward, FPCN regions exhibit greater sustained activation and reduced transient/reactive activation, ostensibly reflecting the active maintenance of task rules across trials (Braver, 2012; Jimura et al., 2010).
Several lines of evidence suggest that FPCN regions may play an integrative role, directly representing control demands in relation to expected outcomes. First, Dixon & Christoff (2012) found that the FPCN flexibly represented trial-to-trial shifts in the association between specific task rules and expected reward outcomes (Dixon & Christoff, 2012). This finding is consistent with the fact that FPCN neurons encode not only rule information, but also experienced and expected reward and punishment (Abe & Lee, 2011; Asaad & Eskandar, 2011; Hikosaka & Watanabe, 2000; Histed et al., 2009; Hosokawa & Watanabe, 2012; Kennerley & Wallis, 2009; Kim, Hwang, & Lee, 2008; Klein, Deaner, & Platt, 2008; Kobayashi et al., 2006; Matsumoto, Suzuki, & Tanaka, 2003; Pan et al., 2008; Platt & Glimcher, 1999; Seo, Barraclough, & Lee, 2007; Wallis & Miller, 2003; Watanabe, 1996; Watanabe, Hikosaka, Sakagami, & Shirakawa, 2002). Second, McGuire and Botvinick (2010) found that the lateral PFC signaled the cost of exerting cognitive effort, suggesting that the FPCN plays a role in linking control demands to other parameters that are important for deciding when to implement control. In fact, numerous studies have demonstrated FPCN activation during value-based decision making (Christopoulos et al., 2009; Diekhof & Gruber, 2010; Gianotti et al., 2009; Huettel, Song, & McCarthy, 2005; Hutcherson, Plassmann, Gross, & Rangel, 2012; Jimura et al., 2013; Jimura & Poldrack, 2012; Lebreton et al., 2013; McClure et al., 2004; Plassmann, O'Doherty, & Rangel, 2007; Plassmann, O'Doherty, & Rangel, 2010; Tanaka et al., 2004; Tobler et al., 2009; Vickery, Chun, & Lee, 2011; Weber & Huettel, 2008). Third, several studies have shown an interaction between control level and reward expectancy in the FPCN (Bahlmann et al., 2015; Ivanov et al., 2012; Padmala & Pessoa, 2011).
Finally, disruption of the FPCN—via transcranial magnetic stimulation or due to a lesion—disrupts value processing and leads to altered motivation (Camus et al., 2009; Essex, Clinton, Wonderley, & Zald, 2012; Paradiso et al., 1999; Zamboni et al., 2008). Together, these findings suggest that the FPCN may play an integrative role, serving as a bridge between control demands and motivational outcomes (Dixon & Christoff, 2014; Dixon, Thiruchselvam, et al., 2017; Pessoa, 2008; Watanabe, 2017; Watanabe & Sakagami, 2007).
Our meta-analytic results also revealed that the anterior insula and anterior mid-cingulate cortex play key roles in motivated cognitive control. These regions are part of the “salience network” (Menon & Uddin, 2010; Seeley et al., 2007). The insula has a well-established role in interoception—the representation of internal bodily signals including pain, temperature, respiratory and cardiac sensations (Craig, 2002; Critchley & Harrison, 2013; Critchley et al., 2004; Farb, Segal, & Anderson, 2012). This region is also activated during a variety of goal-directed tasks (Dixon et al., 2014a; Dosenbach et al., 2006; Duncan, 2010; Farb et al., 2012), suggesting that it may serve as a nexus between the FPCN and other interoceptive regions, allowing viscero-somatic signals to become integrated with task goals (Dixon et al., 2014a; Farb et al., 2012; Jezzini et al., 2012). It has been suggested that the aMCC plays a role in determining the value of implementing control (Shenhav et al., 2013). An alternative perspective is that this region plays a more specific role in linking reinforcement to different actions (Camille, Tsuchida, & Fellows, 2011; Dixon, Thiruchselvam, et al., 2017; Rushworth, Behrens, Rudebeck, & Walton, 2007). This region is well connected to the motor system (Morecraft & Tanji, 2009), is sensitive to the effort costs of actions (Croxson et al., 2009; Kurniawan, Guitart-Masip, Dayan, & Dolan, 2013; Shidara & Richmond, 2002), and encodes action-outcome associations (Alexander & Brown, 2011; Procyk et al., 2014; Rushworth et al., 2007; Shackman et al., 2011). Unlike the IFS and IPS, the aMCC does not encode rule-outcome associations (Dixon & Christoff, 2012).
Rather, during motivated cognitive control, the insula and aMCC may play a role in translating rule-outcome associations represented by the FPCN into concrete viscero-motor body states that drive optimal behavior in service of acquiring a reward or avoiding punishment (Dixon et al., 2014a; Knutson & Greer, 2008; Medford & Critchley, 2010; Rushworth et al., 2011; Shima & Tanji, 1998). Consistent with this idea, we found that the anterior insula and aMCC were specifically recruited during the target rather than cue period. The functions of these regions may facilitate the maintenance of effort prior to and during action execution (Croxson et al., 2009; Parvizi et al., 2013; Shidara & Richmond, 2002).
The Neurosynth analysis also highlighted the caudate nucleus extending into the NAcc, while the cue period ALE analysis highlighted the caudate and the midbrain near the VTA. These regions are part of a dopaminergic midbrain-striatal circuit that signals prediction errors when there is a discrepancy between expected and actual rewards (Hare et al., 2008; Montague, Dayan, & Sejnowski, 1996; O'Doherty et al., 2004; Schultz, 1997). Moreover, neurons in this circuit gradually shift the timing of maximal firing from actual outcomes to reward-predictive cues (Schultz, 1997). These regions play a fundamental role in goal-directed behavior by biasing action selection based on the anticipation of imminent rewards and the opportunity to exercise choice (Knutson, Adams, Fong, & Hommer, 2001; Leotti & Delgado, 2014). Accordingly, this circuit may play a role in broadcasting predicted cue values to other systems involved in constructing rule-outcome associations and modulating viscero-motor processing. Indeed, prior work has outlined detailed models of how the dopaminergic midbrain-striatal circuit serves a gating function that strengthens or destabilizes current working memory contents depending on task demands (Cohen et al., 2002; Cools, 2016; Hazy, Frank, & O'Reilly, 2006). Specifically, tonic dopamine in the PFC is thought to enhance the stability of working memory content by increasing the signal-to-noise ratio (that is, boosting the strength of local recurrent activity relative to stimulus-evoked activity). Phasic dopamine, on the other hand, is thought to serve as a gating signal, allowing working memory to be updated based on reward-predicting events (Cohen et al., 2002; Cools, 2016; Hazy et al., 2006).
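The prediction-error signal referenced above is standardly formalized as a temporal-difference (TD) error (Montague, Dayan, & Sejnowski, 1996; Schultz, 1997). As an illustrative sketch (the symbols follow the conventional TD formulation rather than the notation of any particular study in this analysis):

```latex
% Reward prediction error at time t, where:
%   r_t     = reward delivered at time t
%   V(s)    = learned value estimate of state s
%   \gamma  = temporal discount factor (0 <= \gamma <= 1)
%   \alpha  = learning rate
\delta_t = r_t + \gamma \, V(s_{t+1}) - V(s_t)
\qquad\text{with update}\qquad
V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t
```

On this account, a positive $\delta_t$ corresponds to a phasic dopamine burst; as $V$ comes to predict the reward, $\delta_t$ at the outcome shrinks toward zero and instead arises at the earliest reward-predictive cue, consistent with the shift in firing timing noted above.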
A few limitations of the current findings should be noted. Our analysis was based on studies that employed a number of different cognitive control tasks. On the one hand, this suggests that the identified brain regions support motivated cognitive control in general and are not tied to any particular task. On the other hand, this heterogeneity may obscure the delineation of neural systems that link expected outcomes to different types of executive control. As more studies examine this topic, future work may be able to discern whether incentive effects on different aspects of cognitive control (e.g., response inhibition versus working memory updating) have similar or distinct neural substrates. We were also unable to examine the effect of punishment on cognitive control given the small number of fMRI studies on this topic. Given that the observed frontoparietal regions have been shown to encode information about aversive outcomes in addition to rewarding outcomes (Asaad & Eskandar, 2011; Kobayashi et al., 2006), it is possible that substantial overlap with the current findings would be observed. However, there is some indication in prior work that differences may also emerge (Paschke et al., 2015). Future studies may also be able to provide a more in-depth analysis of brain regions showing incentive effects during specific trial periods (e.g., cue versus delay and target processing). Our results were based on a small number of studies and should be seen as preliminary. Given that we found evidence of distinct brain regions involved in cue versus target periods, this may be a key area for future inquiry. Another important dimension of motivated cognitive control is incentive type (i.e., primary versus secondary). However, all studies included in this review operationalized motivation with monetary (i.e., secondary) incentives, with the exception of Beck et al. (2010).
This study compared the effects of primary (juice) and secondary (money) rewards on performance in a Sternberg task. The authors found no significant differences in behavioral improvement between the reward types, but did find both regional and temporal differences in brain activation patterns. This underscores the importance of studying the different types of incentive effects separately.
To summarize, the current findings reveal a select constellation of brain regions that are consistently recruited in studies of motivated cognitive control. Flexible interactions between frontoparietal, salience, and dopaminergic midbrain-striatal networks may underlie the dynamic process by which control signals are precisely tailored based on expected outcomes.