
Pulling the lever in a hurry: the influence of impulsivity and sensitivity to reward on moral decision-making under time pressure



Making timely moral decisions can save a life. However, literature on how moral decisions are made under time pressure reports conflicting results. Moreover, it is unclear whether and how moral choices under time pressure may be influenced by personality traits like impulsivity and sensitivity to reward and punishment.


To address these gaps, in this study we employed a moral dilemma task, manipulating decision time between participants: one group (N = 25) was subjected to time pressure (TP), with a maximum of 8 s for the response (including reading time), while the other (N = 28) was free to take as much time as needed to respond (noTP). We measured type of choice (utilitarian vs. non-utilitarian), decision times, self-reported unpleasantness and arousal during decision-making, and participants’ impulsivity and BIS-BAS sensitivity.


We found no group effect on the type of choice, suggesting that time pressure per se did not influence moral decisions. However, impulsivity affected the impact of time pressure, in that individuals with higher cognitive instability showed slower response times under no time constraint. In addition, higher sensitivity to reward predicted a higher proportion of utilitarian choices regardless of the time available for decision.


Results are discussed within the dual-process theory of moral judgement, revealing that the impact of time pressure on moral decision-making might be more complex and multifaceted than expected, potentially interacting with a specific facet of attentional impulsivity.



Making timely moral decisions is a real challenge, as became evident during the COVID-19 pandemic, when physicians and nurses were forced to quickly choose which patients to treat first under limited healthcare resources.

Sacrificial moral dilemmas are reliable experimental probes to study the contribution of cognitive and emotional processes to moral decision-making [1]. In these studies, participants are confronted with life-and-death hypothetical scenarios where they have to decide whether to endorse or reject the utilitarian choice of killing one person to save more lives. In the classic Trolley dilemma, the utilitarian option requires pulling a lever to redirect a runaway trolley, which would kill five workmen, onto a sidetrack where it will kill only one person; in the Footbridge version, it requires pushing one large man off an overpass onto the tracks to stop the runaway trolley. Research consistently showed that most people respectively endorse and reject the utilitarian resolution in trolley- and footbridge-like dilemmas, despite the identical cost-benefit trade-off [1,2,3].

According to the dual-process model of moral judgement [1], responses to moral dilemmas are driven by the outcomes of a competition between cognitive and emotional processes. In the Footbridge case, a strong emotional aversive reaction to causing harm to one person overrides a cognitive-based analysis of saving more lives, driving toward the rejection of the utilitarian resolution because harming someone is perceived as an intended means to an end. Instead, in the Trolley case, a lower emotional engagement allows the deliberate cost-benefit reasoning to prevail and drive toward the utilitarian choice since harming someone is perceived as an unintended side effect. Therefore, dilemma resolutions vary depending on how much each dilemma type elicits aversive emotions, so that the more emotional processes are engaged the higher the likelihood of rejecting utilitarian choices. Unsurprisingly, in scenarios where the decision-maker’s own life is at stake (“personal involvement”), this pattern reverses, so that a strong negative emotional reaction to self-sacrifice pushes towards utilitarian, self-protective behaviour [4].

Time is a key feature of high-stakes human choices. Time pressure alters decision-making by increasing reliance on emotional states [5]. Previous research has demonstrated that time pressure affects both the outcomes and the processes involved in moral judgement, as it is assumed to reduce the time available for cost-benefit calculation, letting emotional processes prevail. Accordingly, time pressure has been found to reduce the proportion of utilitarian choices [6,7,8,9,10] and to decrease the willingness to self-sacrifice in dilemmas with personal involvement [11]. However, evidence remains mixed, with some studies suggesting that reduced decision times are associated with a higher proportion of utilitarian choices [12, 13], and other studies finding null results [14]. Moreover, very few studies have investigated whether these phenomena are influenced by personality traits known to affect how people make decisions. Among these, impulsivity and motivational drives towards action/inhibition seem particularly relevant.

Impulsivity involves multiple cognitive and behavioural domains (e.g., inability to reflect on choices’ outcomes, to defer rewards, and to inhibit prepotent responses; [15]) that are strongly involved in decision-making. Beyond research on psychopathy, studies investigating the role of impulsivity in moral dilemmas are surprisingly scarce. Within moral judgments, higher impulsivity should reduce the engagement of deliberative processes, thereby allowing emotional processes to prevail. Nonetheless, previous studies measuring impulsivity in moral judgement tasks have found no effects of impulsivity on the type of resolutions taken [16,17,18], and to our knowledge, no study has manipulated decision times.

Motivational drives towards action/inhibition, namely the Behavioural Inhibition and Activation Systems (BIS/BAS), are worthy of investigation, too. Indeed, the BIS is sensitive to signals of punishment, inhibiting behaviours that lead to negative outcomes or potential harm, whereas the BAS is sensitive to reward, driving behaviours that result in positive outcomes [19]. Within moral dilemmas, “reward” corresponds to the maximisation of lives saved, thereby driving towards utilitarian resolutions. Consistently, previous research [20] showed that higher-BAS individuals tended to make an overall higher number of utilitarian choices, while higher-BIS participants tended to reject utilitarian resolutions, particularly in footbridge-like dilemmas. Notably, without time constraints no effects of BIS-BAS sensitivity emerged on response times.

In summary, there is strong evidence that cognitive-emotional conflict drives moral decisions in sacrificial dilemmas and that reducing decision time can further affect moral choices. However, the direction of this effect is still unclear, as is whether impulsivity and BIS-BAS sensitivity influence these processes. To address these gaps, in our study we used a standardised set of moral scenarios to investigate the effect of impulsivity and BIS-BAS sensitivity on moral decision-making under time pressure. We manipulated decision time between participants, as has been done successfully by the majority of studies manipulating time pressure in a moral dilemma task (e.g [6,7,8,9]). A within-subjects design, conversely, would not have been appropriate because it could have generated a sequential effect in the responses (cf [21, 22]): the speeding effect of the time pressure condition could have carried over to the condition with no time pressure, potentially undermining the effectiveness of the manipulation. Moreover, we measured impulsivity and BIS/BAS sensitivity as well as self-reported valence and arousal experienced during decision-making. Consistent with the dual-process model, in the time pressure group we expected to find faster response times, higher arousal and unpleasantness ratings, and lower proportions of utilitarian choices [6,7,8,9,10]. As for the effects of impulsivity and BIS-BAS sensitivity, the literature is less conclusive in guiding stringent confirmatory hypotheses. Within the dual-process framework, we might hypothesise that individuals with higher impulsivity would exhibit a greater tendency towards emotionally driven responses, particularly under time pressure. Time constraints might hinder a careful evaluation of different options and decision outcomes by increasing emotional activation or depleting the cognitive resources available for decision-making.
This might lead to a lower endorsement of utilitarian choices and/or to an increase in self-protective behaviours in dilemmas with personal involvement. In line with Moore et al. (2011) [20], individuals with higher BAS sensitivity might show an overall propensity towards utilitarian resolutions, while BIS-reactive individuals might show the opposite, and this trend should reverse in dilemmas with personal involvement.



Sixty healthy university students (37 F) were recruited to participate voluntarily in the study. They had no history of psychiatric or neurological disorders, nor prior knowledge of moral dilemmas. The sample size was based on previous studies manipulating time pressure in moral dilemma tasks [6, 8], and allowed us to reach 96% post-hoc power (α = 0.05, f = 0.50).

Participants were randomly assigned to either the time pressure (TP, N = 30) or no time pressure (noTP, N = 30) group. Data from 6 participants were discarded because of deviations from instructions during data collection (e.g., reversed response scales, not keeping the fingers on the computer keys during the task). Data from 1 participant were discarded according to the a-priori criterion of missing responses in more than 20% of the trials. The final sample included 53 participants (TP group = 25, F = 15, age M = 22 years, SD = 1.55 years, range = 20–25; noTP group = 28, F = 17, age M = 21.9 years, SD = 1.77 years, range = 19–25).

All participants gave written consent before participation. The study was submitted and approved by the Ethical Committee for the Psychological Research of the University of Padua (protocol n. 2105) and conducted in accordance with the Declaration of Helsinki.

Stimulus material

A set of 75 moral dilemmas [4] was administered to each participant. This consisted of 60 experimental dilemmas and 15 filler dilemmas. Experimental dilemmas included 30 trolley- and 30 footbridge-like dilemmas, of which 15 with personal involvement and 15 without personal involvement. Filler dilemmas were similar to experimental dilemmas but described non-lethal moral issues (e.g., stealing, lying, being dishonest), and were included to prevent response automaticity due to habituation to life-and-death scenarios. This condition was not analysed and will not be discussed further here.

Dilemmas were presented randomly within 3 blocks of 25 trials each (10 footbridge-like, 10 trolley-like, and 5 filler dilemmas). Each dilemma was presented as text, in white type against a grey background, through a series of two screens. The first described the scenario, in which a threat is about to cause the death of a group of people; the second described the hypothetical action (the utilitarian option, namely saving more lives), in which the agent kills one individual to save the group. Participants had to choose whether or not to enact this behaviour by pressing the corresponding key on the computer keyboard.

Stimuli were presented on a 19-inch computer screen at 100 cm distance. Stimuli were presented with E-prime software [23].


Upon arrival, participants were given information about the experiment and signed the informed consent. Then, they were asked to fill out the State-Trait Anxiety Inventory (STAI Form Y-2) [24] and the Beck Depression Inventory (BDI-II) [25]. Since anxiety and depression interact with emotional reactivity and with decision-making under time pressure [26], we measured and controlled for them in our experiment.

Afterwards, participants sat in a sound-attenuated room where instructions for the task were given. Specifically, they were asked to identify with the main character of the scenarios. Each trial began with the scenario, which participants could read at their own pace. After pressing the spacebar, the utilitarian option was presented for a maximum of 8 s in the TP group and for an unlimited time in the noTP group. Participants were asked to read the proposed action and decide whether to choose it or not by pressing one of two computer keys marked as “YES” or “NO”.

In the TP group participants had a limited time to respond, as indicated by a white bar located at the top of the screen above the text, which shrank every second and disappeared when time ran out. Instructions stressed that responses had to be given within the time indicated by the bar. If participants failed to respond within the allotted time, the next scenario appeared. In the noTP group participants were instructed to respond when they reached a decision, taking as much time as they wanted. In both groups, response times were recorded from the onset of the utilitarian option on the screen.
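The task itself was run in E-Prime; purely as an illustration of the trial rules just described, the following Python sketch (function and constant names are ours, not from the original script) shows how a single trial could be classified as valid or missing under the two group regimes.

```python
# Hypothetical sketch of the trial-validity rules described above:
# in the TP group a response counts only if it arrives within the
# 8 s window; in the noTP group there is no upper limit. Missing
# key presses are always recorded as missing.

TP_LIMIT_MS = 8000  # maximum response window in the TP group (8 s)

def classify_trial(rt_ms, group):
    """Return 'valid' or 'missing' for one trial.

    rt_ms -- response time in ms from onset of the utilitarian option,
             or None if no key was pressed
    group -- 'TP' (time pressure) or 'noTP' (self-paced)
    """
    if rt_ms is None:
        return "missing"                      # no response at all
    if group == "TP" and rt_ms > TP_LIMIT_MS:
        return "missing"                      # bar ran out before the response
    return "valid"                            # noTP has no upper limit
```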

After their response, participants were required to rate how they had felt while deciding, using a computerised version of the Self-Assessment Manikin (SAM) [27] displaying the 9-point scales of valence (unpleasantness/pleasantness) and arousal (calm/activation), with higher scores indicating higher pleasantness and higher arousal. Then, the next scenario was presented. After each block of trials, participants could take a break to avoid fatigue. Before starting the experimental session, each participant familiarised themselves with the task through two practice trials to check that they understood the instructions properly. After the experimental session, participants were asked to fill out the Barratt Impulsiveness Scale (BIS-11) [28] and the BIS-BAS Scales [29].

The BIS-11 is a 30-item self-report questionnaire measuring impulsivity. It is rated on a 4-point Likert scale from 1 = rarely/never to 4 = almost always/always. The total scores range from 30 to 120, with higher total scores reflecting higher levels of impulsivity. The BIS-11 comprises six first-order subscales of attention (e.g., “focusing on the task at hand”), motor impulsiveness (e.g., “acting on the spur of the moment”), self-control (e.g., “planning and thinking carefully”), cognitive complexity (e.g., “enjoy challenging mental tasks”), perseverance (e.g., “a consistent lifestyle”), and cognitive instability (e.g., “thought insertions and racing thoughts”). These fell under three second-order subscales: attentional impulsiveness (attention and cognitive instability), motor impulsiveness (motor impulsiveness and perseverance), and non-planning impulsiveness (self-control and cognitive complexity).

The BIS-BAS scales are a self-report measure of BIS-BAS sensitivity containing a 7-item BIS subscale and a 13-item BAS factor comprising 3 subscales. It is rated on a 5-point Likert scale from 1 = “does not describe me at all” to 5 = “describes me completely”, with higher scores indicating higher BIS-BAS sensitivity. The BIS subscale includes items regarding reactions to the anticipation of punishment. The BAS factor assesses how people respond to potentially rewarding events and comprises three subscales: Reward Responsiveness (5 items regarding positive responses to anticipated or actual reward), Drive (4 items pertaining to the pursuit of desired goals), and Fun Seeking (4 items referring to the desire for new rewards and the willingness to approach a potentially rewarding event).
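As a rough illustration of how such scales yield subscale scores, the sketch below sums Likert ratings into totals. The item-to-subscale mapping is purely hypothetical (the published scale defines the actual item assignments); only the subscale sizes match the description above.

```python
# Illustrative BIS-BAS scoring sketch. The item numbers below are
# NOT the real scale items -- they are placeholders chosen only so
# that the subscale sizes (7, 5, 4, 4) match the text.

SUBSCALES = {
    "BIS": [1, 2, 3, 4, 5, 6, 7],           # 7-item BIS subscale
    "BAS_RewardResp": [8, 9, 10, 11, 12],   # 5-item Reward Responsiveness
    "BAS_Drive": [13, 14, 15, 16],          # 4-item Drive
    "BAS_FunSeeking": [17, 18, 19, 20],     # 4-item Fun Seeking
}

def score_bis_bas(responses):
    """responses: dict mapping item number -> Likert rating (1..5)."""
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}
```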

Data analysis

The study has a 2 (group, between-subjects: TP vs. noTP) × 2 (dilemma type, within-subjects: trolley-like vs. footbridge-like) × 2 (personal involvement, within-subjects: no involvement vs. involvement) mixed design. The dependent variables (DVs) were: type of choice (utilitarian vs. non-utilitarian), choice response times (in msec), valence ratings, and arousal ratings. Although the type of choice (utilitarian vs. non-utilitarian) may itself be a factor of interest, we chose not to include it as an additional fixed factor in our analysis because, within each dilemma type, the number of trials in which participants opted for utilitarian resolutions was not comparable to the number of trials in which they rejected them; a statistical comparison between the two types of choice would therefore have been unreliable. For the sake of completeness, however, we provide in the Supplementary Material descriptive statistics (Table S1) and plots (Figure S1) for choice response times, valence, and arousal ratings as a function of group (TP vs. noTP), dilemma type (trolley- vs. footbridge-like), and type of choice (utilitarian vs. non-utilitarian).

Data were pre-processed according to the following a-priori criteria: trials with missing values or with response times ≤ 150 msec were discarded (~23%), response times were log-transformed to account for their skewed distribution [30], and questionnaire scores were mean-centred.
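A minimal sketch of these pre-processing steps, assuming trials are stored as raw millisecond RTs (with None for missing responses) and questionnaire scores as plain lists:

```python
import math

# Sketch of the a-priori pre-processing described above: drop missing
# trials and anticipations (RT <= 150 ms), log-transform the surviving
# RTs, and mean-centre questionnaire scores. Data structures are
# illustrative, not the actual analysis pipeline.

def preprocess_rts(rts_ms):
    """Keep RTs > 150 ms (dropping None/missing) and return their logs."""
    kept = [rt for rt in rts_ms if rt is not None and rt > 150]
    return [math.log(rt) for rt in kept]

def mean_centre(scores):
    """Subtract the sample mean from each questionnaire score."""
    m = sum(scores) / len(scores)
    return [s - m for s in scores]
```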

Analyses were performed using R software. Outliers were detected through median absolute deviation values (MAD > 3) computed on choice, choice response times, valence, and arousal ratings. We identified 6 univariate outliers. However, visual inspection showed that their values differed only slightly from those of the other participants. Since none of them significantly impacted the models’ estimates (as assessed through Cook’s distance, see below), we kept them in the analysis. Data from 53 participants entered data analysis.
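The MAD-based screen can be sketched as follows (the actual analysis was run in R). We assume the common normal-consistency constant 0.6745 for scaling the MAD, which the paper does not state explicitly:

```python
import statistics

# Sketch of the univariate outlier screen described above: flag values
# lying more than 3 median absolute deviations (MAD) from the sample
# median. Dividing the MAD by 0.6745 makes it comparable to a standard
# deviation under normality (an assumption on our part).

def mad_outliers(values, cutoff=3.0):
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values]) / 0.6745
    if mad == 0:
        return []          # degenerate case: no spread around the median
    return [v for v in values if abs(v - med) / mad > cutoff]
```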

For each DV we fitted a (Generalised) Linear Mixed-effects Model ((G)LMM) with an individual random intercept and with group, dilemma type, personal involvement, and their interactions as fixed factors. We used a binomial family for the GLMM on choice and a Gaussian family for the LMMs on the remaining DVs. BIS-11 and BIS-BAS scores were added as covariates in separate models, controlling for STAI and BDI-II scores in each model. When a significant effect of a questionnaire predictor was found, additional models testing the slopes of the questionnaire trends for each level of the fixed factors (group and dilemma type) were performed.

Influential cases were evaluated through Cook’s distance (> 1); none were identified (N = 0). GLMM effects were tested through Type II Analysis of Deviance, while LMM effects were tested by means of F-tests and p-values calculated via Satterthwaite’s degrees-of-freedom method (α = 0.05). All post-hoc pairwise comparisons were tested through estimated marginal means or trend contrasts, adjusted for multiple comparisons with the False Discovery Rate (FDR) method. For each model, we report in the Supplementary Material (Tables S2-S5) the estimated parameters with 95% CI, and the marginal and conditional R2.
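The FDR correction applied to the pairwise contrasts is the Benjamini-Hochberg adjustment; as a generic illustration of the method (not the R code actually used), it can be sketched as:

```python
# Generic Benjamini-Hochberg FDR adjustment: each sorted p-value is
# scaled by m/rank, and a running minimum taken from the largest rank
# downwards keeps the adjusted values monotone non-decreasing in p.

def fdr_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):                      # largest p first
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = min(running_min, 1.0)           # cap at 1
    return adjusted
```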


Descriptive statistics

Descriptive statistics are summarised in Table 1.

Table 1 Mean (M) and standard deviation (SD) of the main research variables

Proportion of utilitarian choices

The model on choices (R2 marginal = 0.277; R2 conditional = 0.506; Fig. 1A; Supplementary Material Table S2) did not show significant group effects (χ2 (1) = 1.05, p = .777). A main effect of dilemma type (χ2 (1) = 647.064, p < .001) was observed, with trolley-like dilemmas eliciting a higher proportion of utilitarian choices than footbridge-like dilemmas (trolley vs. footbridge: 2.68, SE = 0.104, z = 25.63, p < .001). We also found a main effect of involvement (χ2 (1) = 9.36, p = .002), better specified by a significant dilemma type × involvement interaction (χ2 (1) = 10.13, p = .002; Fig. 1B). Dilemmas with personal involvement elicited a higher proportion of utilitarian choices than dilemmas without personal involvement only in footbridge-like dilemmas (trolley no involvement vs. involvement: -0.026, SE = 0.126, z = -0.207, p = .836; footbridge no involvement vs. involvement: -0.610, SE = 0.138, z = -4.424, p < .001). Lastly, we found significant effects of BIS-BAS scores: regardless of group and dilemma type, and controlling for STAI and BDI-II scores, higher Reward Responsiveness subscale scores predicted a higher proportion of utilitarian choices (R2 marginal = 0.310; R2 conditional = 0.497; χ2 (1) = 9.35, p = .002; β = 0.181, SE = 0.06, z = 3.27, p = .001; Fig. 1C).

Fig. 1

(A) Utilitarian choices as a function of the Dilemma type. (B) Utilitarian choices as a function of Personal Involvement. (C) Relation between utilitarian choices and BIS-BAS Reward Responsiveness subscale scores in the whole sample. Error bars (and grey area) represent standard errors of the means

Choice response times

The model on choice response times (R2 marginal = 0.250; R2 conditional = 0.588; Fig. 2A; Supplementary Material Table S3) showed significant main effects of group (F(1, 51) = 32.53, p < .001) and dilemma type (F(1, 3038) = 270.41, p < .001). Response times were faster in the TP than in the noTP group (noTP vs. TP: 0.463 in log scale and 4266 in msec, SE = 0.081, t(51) = 5.7, p < .001), and in footbridge- than trolley-like dilemmas (trolley vs. footbridge: 0.191 in log scale and 1169 in msec, SE = 0.012, t(3038) = 16.44, p < .001). We also found a main effect of involvement (F(1, 3038) = 4.56, p = .033), better specified by a significant dilemma type × involvement interaction (F(1, 3038) = 4.66, p = .031; Fig. 2B). Dilemmas with personal involvement elicited faster response times than dilemmas without personal involvement only in footbridge-like dilemmas (trolley no involvement vs. involvement: 0.00 in log scale and 26 in msec, SE = 0.016, t(3038) = -0.017, p = .987; footbridge no involvement vs. involvement: 0.05 in log scale and 210 in msec, SE = 0.016, t(3038) = 3.039, p = .002).

We also found an effect of the BIS-11 Cognitive Instability score, which remained significant when controlling for STAI and BDI-II scores. From the model (R2 marginal = 0.298; R2 conditional = 0.605) testing the slopes of the BIS-11 Cognitive Instability trend for each level of the fixed factors, a significant interaction emerged between BIS-11 Cognitive Instability scores, group, and dilemma type (F(1, 3039) = 4.40, p = .036; Fig. 2C). The slope analysis showed that higher BIS-11 Cognitive Instability scores predicted slower response times in the noTP group for both dilemma types (trolley: β = 0.062, SE = 0.030, CI = [0.001, 0.122]; footbridge: β = 0.077, SE = 0.030, CI = [0.016, 0.138]), whereas slopes in the TP group were not statistically different from 0 (trolley: β = -0.016, SE = 0.034, CI = [-0.084, 0.051]; footbridge: β = -0.034, SE = 0.034, CI = [-0.102, 0.034]).

Fig. 2

(A) Choice response times as a function of the Dilemma type. (B) Choice response times as a function of the Dilemma type and Personal Involvement. (C) Relation between choice response times and BIS-11 Cognitive Instability scores as a function of Dilemma type. Error bars (and shaded areas) represent standard errors of the means. TP: Time Pressure group. noTP: no Time Pressure group

Valence ratings

The model on valence ratings (R2 marginal = 0.006; R2 conditional = 0.485; Fig. 3; Supplementary Material Table S4) showed only a significant main effect of dilemma type (F(1, 3038) = 10.402, p = .001), with trolley-like dilemmas eliciting higher unpleasantness than footbridge-like dilemmas (trolley vs. footbridge: -0.148, SE = 0.046, t(3042) = -3.22, p = .001). Neither personal involvement nor questionnaire scores showed any significant effects.

Fig. 3

Valence ratings as a function of the Dilemma type. Error bars represent standard errors of the means

Arousal ratings

The model on arousal ratings (R2 marginal = 0.004; R2 conditional = 0.540; Fig. 4A; Supplementary Material Table S5) showed a main effect of dilemma type (F(1, 3038) = 4.38, p = .036): trolley-like dilemmas elicited higher arousal ratings than footbridge-like dilemmas (trolley vs. footbridge: 0.111, SE = 0.054, t(3038) = 2.09, p = .036). A main effect of involvement (F(1, 3038) = 20.161, p < .001) also emerged, with dilemmas with personal involvement eliciting higher arousal ratings than dilemmas without personal involvement (no involvement vs. involvement: -0.24, SE = 0.053, t(3038) = -4.49, p < .001). No questionnaire scores significantly modulated arousal ratings.

Fig. 4

(A) Arousal ratings as a function of the Dilemma type. (B) Arousal ratings as a function of Personal Involvement. Error bars represent standard errors of the means

Discussion and conclusions

Making timely moral decisions can be crucial in saving lives (e.g., physicians and nurses during surgeries, airline pilots during turbulent flights). However, little is known about the processes underlying moral decision-making under time pressure, and about their interaction with individual differences in impulsivity or sensitivity to reward and punishment. With this study, we aimed to fill these gaps by investigating the influence of these trait dimensions on moral decision-making under time pressure.

In line with the dual-process model [1], we found that trolley-like dilemmas elicited a higher proportion of utilitarian choices and slower response times, suggesting that rational cost-benefit analysis required additional time and cognitive effort. Moreover, contrary to the dual-process framework, but consistent with prior work using the present dilemma set [31], higher unpleasantness and arousal were reported in trolley-like dilemmas. This result may be explained by the higher proportion of utilitarian choices in trolley-like dilemmas. Indeed, qualitative inspection of the descriptive statistics on valence and arousal ratings as a function of the type of choice (see Supplementary Material Table S1 and Figure S1) indicates that, in both groups and dilemma types, higher unpleasantness is related to a higher proportion of utilitarian choices. This suggests that sacrificing one person, even when perceived as a side effect of maximising the number of lives saved, still carries an emotional cost. Given that utilitarian choices are more numerous in trolley-like dilemmas, we can reasonably speculate that the higher unpleasantness and arousal ratings found in trolley-like dilemmas are due to the higher number of choices in which participants faced the emotional cost of utilitarian resolutions. Consistent with the dual-process model, we also found that dilemmas with personal involvement (especially the footbridge-like ones) elicited a higher proportion of utilitarian choices, faster response times, and heightened arousal. In these dilemmas, where the utilitarian option implies saving one’s own life, greater emotional engagement results in clear-cut and prompt utilitarian decisions.

However, in contrast with our hypothesis, time pressure per se did not affect moral decisions or emotional experience, as evidenced by the lack of a group effect on type of choice, valence, and arousal ratings. In particular, time pressure did not induce a heightened state of arousal, as might be expected (e.g [32]). We might speculate that emotional arousal was primarily influenced by the task of resolving dilemmas. The strong emotional engagement elicited during dilemma resolutions may have limited the impact of time pressure on subjective arousal, such that the additional stress of time constraints did not lead to a significant incremental effect. Indeed, arousal ratings were consistently high (i.e., > 6.2) across all types of dilemmas and involvement conditions in both groups.

Overall, we can reasonably rule out that these results are due to a failure of our experimental manipulation to induce time pressure. Indeed, the TP group showed faster decision times than the noTP group. It is possible that the 8-sec constraint we employed was not stringent enough to affect the type of choice (cf. the 4.4-sec and 1-sec constraints in [6, 10]). However, as in [8], our 8-sec time constraint included the reading time for the utilitarian option (~6.5 s, see [3]), and the decision time was constant across dilemmas since the number of words and text characters of the utilitarian options was fully balanced throughout (see [4]). This meant that participants actually had only 1.5 s, on average, to make a decision. Still, it is also conceivable that such a duration was inadequate to induce either a heightened state of arousal (as noted above) or a significant reduction in the cognitive resources available for engaging controlled processes during decision-making. Indeed, previous research on moral dilemmas [33] has demonstrated that a moderate cognitive load induced by a secondary task (i.e., a concurrent digit-search task) increased response times for utilitarian judgements, while not affecting the type of judgement. Interestingly, a higher degree of cognitive load (i.e., performing an extremely difficult dot memory task) was found to be effective in reducing the number of utilitarian responses in high-conflict moral dilemmas [34]. These findings suggest that stronger experimental manipulations are needed to impact effortful cognitive processing during the resolution of moral dilemmas, either by employing a higher cognitive load or by imposing stricter temporal constraints.

Nonetheless, in our view, what may explain our unexpected result of a null group effect lies mainly in the moral task employed. While previous studies [6,7,8,9,10, 12] used a moral acceptability question format, thus measuring moral judgments, we asked participants whether they would actually perform the proposed action. Prior research highlighted a dissociation between moral judgement and choice of action, so that the latter is more closely tied to emotional experience and personal responsibility, whereas judgement mainly relies on cognitive perspective-taking [35]. It is thus plausible that time pressure interferes more with moral judgement, which additionally requires shifting from a first- to a third-person perspective [36].

However, in our study time pressure indirectly influenced moral choices, as demonstrated by the interaction effects on a specific facet of attentional impulsivity. Higher scores of cognitive instability, as indexed by the BIS-11 subscale, predicted slower response times in the noTP group. Although such a result may seem counterintuitive and is in contrast with our hypothesis, previous research showed that impulsive individuals tend to be slower in choice reaction time tasks [37] and Go/NoGo tasks [38], especially when information-processing demands and response complexity are increased [39], while other studies (e.g [40]), reported an increase in time taken to resolve interference. Cognitive instability involves intrusive thoughts and rapid shifts in attention and thinking, which can lead to difficulties in maintaining a consistent approach to complex problems. Therefore, when faced with moral dilemmas, individuals with high cognitive instability might find it challenging to decide on a course of action. This could result in longer response times especially when there are no time constraints, as they might re-evaluate available choices and decision outcomes multiple times. Conversely, when the time available for decision-making was constrained (TP group), cognitive instability exerted no influence. Thus, it can be speculated that time pressure might override the (disturbing) influence of cognitive instability by promoting focus on the task at hand, minimising the impact of internal distractions, and ensuring attention is maintained on relevant information. Interestingly, these effects appear to be specific to this facet of attentional impulsivity, as other dimensions of impulsivity showed no associations with decision times. Furthermore, our analysis accounted for symptoms of depression and anxiety.

With regard to the BIS-BAS scales, a heightened tendency to anticipate and desire immediate reward (as indexed by the BAS Reward Responsiveness subscale) predicted a higher proportion of utilitarian choices in both groups, consistent with Moore and colleagues (2011) [20]. Therefore, regardless of the time available for decision-making, individuals who are more sensitive to rewards may be more inclined towards utilitarian responses as they may prioritise the maximum overall positive outcome (i.e., saving the majority of people). This result might seem at odds with the idea that higher reward responsiveness should be related to a clear-cut prioritisation of self-interest, which, in this context, pertains to personal survival. Indeed, a number of studies on both healthy (e.g [41]), and clinical populations (e.g [42]), have highlighted that reward sensitivity plays a significant role in increasing the propensity for immoral behaviour (e.g., voluntary deception for one’s own benefit). Concurrently, research has demonstrated that individuals high in the psychopathy trait, which is associated with alterations of the neural reward system (e.g [43]), show an increased willingness to endorse utilitarian choices (e.g [35, 44]). This propensity, though, may be attributed in psychopathy to a weaker sensitivity to consequences and a reduced concern for inflicting harm [45]. In our view, in the case of sacrificial moral dilemmas, where each choice involves the death of human beings and no choice is truly “right” or definitively moral, individuals high in reward sensitivity might still find it rewarding to help others, thus pursuing social rewards. Our findings contribute to the understanding of the complex interplay between reward sensitivity and moral behaviour, highlighting the significance of a specific contextual reward condition in which the lives of other people are at stake.
Interestingly, while impulsivity has also been suggested to involve a tendency to prioritise immediate over delayed rewards (e.g., [46]), this trait did not influence responses or decision times specifically related to rewards, whether perceived as the saving of a greater number of lives or as personal survival. This suggests that, at least in the context of sacrificial moral dilemmas, reward-reactivity might be related to, yet distinct from, trait impulsivity (see [47]). Contrary to Moore et al. (2011) [20], we did not find any significant effect of BIS sensitivity. This discrepancy could once again be attributed to the different processes involved in formulating a moral judgement (as in [20]) vs. deciding to undertake an action, as in our case. The BIS, being more focused on avoiding negative outcomes, might not have been as influential in this context, where either dilemma choice had aversive implications from a first-person perspective.

Summarising, our study revealed that the impact of time pressure on moral decision-making might be more complex and multifaceted than expected, potentially interacting with a specific facet of attentional impulsivity. When dilemma resolutions are formulated as actions to be endorsed or rejected based on a first-person perspective, decision choices do not appear to be influenced by the time available for deliberation. This indicates marked stability in behavioural responses to footbridge- and trolley-like dilemmas, as well as in the respective underlying processes. However, time pressure seemed to counteract the slowing effects of individual cognitive instability, possibly by maintaining attentional focus and thus reducing the interference from cognitive-emotional conflicts. Interestingly, individual sensitivity to reward predicted overall utilitarian choices, indicating that within sacrificial moral dilemmas, the number of lives saved can be effectively reframed as a (social) reward to be pursued. As might be expected, this broad effect was not sensitive to time pressure.

In conclusion, some limitations of our study are worth mentioning. First, our paradigm did not include a “question screen” (e.g., [8]) that typically follows the option and to which decision times would be time-locked. This choice was based on the idea that decision-making encompasses dynamic, overlapping processes beginning as early as the reading of the option. However, it implied that decision times were strictly dependent on reading times; therefore, shorter individual reading times in the TP group might have prevented the allotted decision time from exerting sufficient pressure. Second, although our study focused on impulsivity and BIS-BAS sensitivity, and controlled for levels of anxiety and depression, we acknowledge that other temperamental or personality traits may also affect the relation between time pressure and moral choices. We encourage further studies to better understand this complex, multifaceted phenomenon by overcoming the limitations of the present research.

Data availability

All the data and analyses cited in this manuscript have been made publicly available within the Open Science Framework (OSF) and can be accessed at the following permanent anonymous link:



Abbreviations

BAS: Behavioural Activation System
BDI-II: Beck Depression Inventory-II
BIS-11: Barratt Impulsiveness Scale 11
BIS: Behavioural Inhibition System
DV: Dependent variable
FDR: False discovery rate
GLMM: Generalised Linear Mixed-effects Model
LMM: Linear Mixed-effects Model
MAD: Median absolute deviation
noTP: No time pressure group
RT: Response time
SAM: Self-Assessment Manikin
SD: Standard deviation
SE: Standard error
STAI: State-Trait Anxiety Inventory
TP: Time pressure group

  1. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD. An fMRI investigation of emotional engagement in moral judgment. Science. 2001;293(5537):2105–8.


  2. Palmiotti GP, Del Popolo Cristaldi F, Cellini N, Lotto L, Sarlo M. Framing the outcome of moral dilemmas: effects of emotional information. Ethics Behav. 2020;30(3):213–29.


  3. Sarlo M, Lotto L, Manfrinati A, Rumiati R, Gallicchio G, Palomba D. Temporal dynamics of cognitive–emotional interplay in moral decision-making. J Cogn Neurosci. 2012;24(4):1018–29.


  4. Lotto L, Manfrinati A, Sarlo M. A new set of moral dilemmas: norms for moral acceptability, decision times, and emotional salience. J Behav Decis Mak. 2014;27(1):57–65.


  5. Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. J Behav Decis Mak. 2000;13(1):1–17.


  6. Cummins D, Cummins R. Emotion and deliberative reasoning in moral judgment. Front Psychol. 2012;3.

  7. Kroneisen M, Steghaus S. The influence of decision time on sensitivity for consequences, moral norms, and preferences for inaction: time, moral judgments, and the CNI model. J Behav Decis Mak. 2021;34(1):140–53.


  8. Suter RS, Hertwig R. Time and moral judgment. Cognition. 2011;119(3):454–8.


  9. Trémolière B, Bonnefon JF. Efficient kill–save ratios ease up the cognitive demands on counterintuitive moral utilitarianism. Pers Soc Psychol Bull. 2014;40(7):923–30.


  10. Yahoodik S, Samuel S, Yamani Y. Ethical decision making under time pressure: an online study. Proc Hum Factors Ergon Soc Annu Meet. 2021;65(1):601–5.

  11. Swann WB, Gómez Á, Buhrmester MD, López-Rodríguez L, Jiménez J, Vázquez A. Contemplating the ultimate sacrifice: identity fusion channels pro-group affect, cognition, and moral decision making. J Personal Soc Psychol. 2014;106(5):713–27.


  12. Rosas A, Aguilar-Pardo D. Extreme time-pressure reveals utilitarian intuitions in sacrificial dilemmas. Think Reason. 2020;26(4):534–51.


  13. Tinghög G, Andersson D, Bonn C, Johannesson M, Kirchler M, Koppel L, et al. Intuition and moral decision-making – the effect of time pressure and cognitive load on moral judgment and altruistic behavior. PLoS ONE. 2016;11(10):e0164012.


  14. Stenmark CK, Antes AL, Wang X, Caughron JJ, Thiel CE, Mumford MD. Strategies in forecasting outcomes in ethical decision-making: identifying and analyzing the causes of the problem. Ethics Behav. 2010;20(2):110–27.


  15. Chamberlain SR, Sahakian BJ. The neuropsychiatry of impulsivity. Curr Opin Psychiatry. 2007;20(3):255.


  16. Carmona-Perera M, Clark L, Young L, Pérez-García M, Verdejo-García A. Impaired decoding of fear and disgust predicts utilitarian moral judgment in alcohol-dependent individuals. Alcoholism: Clin Experimental Res. 2014;38(1):179–85.


  17. Lainidi O, Karakasidou E, Montgomery A, Triad D. Impulsivity and honesty-humility and intended behavior in a prisoner’s dilemma game: a simulation study. In Review; 2021 Aug.

  18. Young S, Gudjonsson GH, Goodwin EJ, Perkins D, Morris R. A validation of a computerised task of risk-taking and moral decision-making and its association with sensation-seeking, impulsivity and sociomoral reasoning. Pers Indiv Differ. 2013;55(8):941–6.


  19. Gray JA. Brain systems that mediate both emotion and cognition. Cogn Emot. 1990;4(3):269–88.


  20. Moore AB, Stevens J, Conway ARA. Individual differences in sensitivity to reward and punishment predict moral judgment. Pers Indiv Differ. 2011;50(5):621–5.


  21. Dror IE, Basola B, Busemeyer JR. Decision making under time pressure: an independent test of sequential sampling models. Mem Cognit. 1999;27(4):713–25.


  22. Kerstholt JH. The effect of time pressure on decision-making behaviour in a dynamic task environment. Acta Psychol. 1994;86(1):89–104.


  23. Schneider W, Eschman A, Zuccolotto A. E-prime. Pittsburgh, PA: Psychology Software Tools; 2010.


  24. Spielberger C, Gorsuch R, Lushene R, Vagg P, Jacobs G. Manual for the State-Trait Anxiety Inventory (Form Y1–Y2). Vol. IV. Palo Alto, CA: Consulting Psychologists Press; 1983.


  25. Beck AT, Steer RA, Brown G. Beck Depression Inventory. 2nd ed. San Antonio, TX, USA: The Psychological Corporation; 1996.


  26. Paulus MP, Yu AJ. Emotion and decision-making: affect-driven belief systems in anxiety and depression. Trends Cogn Sci. 2012;16(9):476–83.


  27. Lang PJ, Bradley MM, Cuthbert BN. International affective picture system (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8. Gainesville, FL: University of Florida; 2008.

  28. Patton JH, Stanford MS, Barratt ES. Factor structure of the Barratt Impulsiveness Scale. J Clin Psychol. 1995;51(6):768–74.


  29. Carver CS, White TL. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: the BIS/BAS scales. J Personal Soc Psychol. 1994;67(2):319–33.


  30. Wilcox R, Peterson TJ, McNitt-Gray JL. Data analyses when sample sizes are small: modern advances for dealing with outliers, skewed distributions, and heteroscedasticity. J Appl Biomech. 2018;34(4):258–61.


  31. Cellini N, Mercurio M, Sarlo M. Sleeping over moral dilemmas modulates utilitarian decision-making. Curr Psychol. 2023;42(10):8244–54.


  32. Stiensmeier-Pelster J, Schürmann M. Information Processing in Decision Making under Time Pressure. In: Svenson O, Maule AJ, editors. Time Pressure and Stress in Human Judgment and Decision Making. Boston, MA: Springer US; 1993. p. 241–53.

  33. Greene JD, Morelli SA, Lowenberg K, Nystrom LE, Cohen JD. Cognitive load selectively interferes with utilitarian moral judgment. Cognition. 2008;107(3):1144–54.


  34. Trémolière B, Neys WD, Bonnefon JF. Mortality salience and morality: thinking about death makes people less utilitarian. Cognition. 2012;124(3):379–84.


  35. Tassy S, Oullier O, Mancini J, Wicker B. Discrepancies between judgment and choice of action in moral dilemmas. Front Psychol. 2013;4.

  36. Nichols S, Mallon R. Moral dilemmas and moral rules. Cognition. 2006;100(3):530–42.


  37. Expósito J, Andrés-Pueyo A. The effects of impulsivity on the perceptual and decision stages in a choice reaction time task. Pers Indiv Differ. 1997;22(5):693–7.


  38. Torres A, Catena A, Megías A, Maldonado A, Cándido A, Verdejo-García A, et al. Emotional and non-emotional pathways to impulsive behavior and addiction. Front Hum Neurosci. 2013;7.

  39. Keilp JG, Sackeim HA, Mann JJ. Correlates of trait impulsiveness in performance measures and neuropsychological tests. Psychiatry Res. 2005;135(3):191–201.


  40. Enticott PG, Ogloff JRP, Bradshaw JL. Associations between laboratory measures of executive inhibitory control and self-reported impulsivity. Pers Indiv Differ. 2006;41(2):285–94.


  41. Hu X, Pornpattananangkul N, Nusslock R. Executive control- and reward-related neural processes associated with the opportunity to engage in voluntary dishonest moral decision making. Cogn Affect Behav Neurosci. 2015;15(2):475–91.


  42. Ponsi G, Scattolin M, Villa R, Aglioti SM. Human moral decision-making through the lens of Parkinson’s disease. npj Parkinsons Dis. 2021;7(1):1–7.


  43. Buckholtz JW, Treadway MT, Cowan RL, Woodward ND, Benning SD, Li R, et al. Mesolimbic dopamine reward system hypersensitivity in individuals with psychopathic traits. Nat Neurosci. 2010;13(4):419–21.


  44. Pletti C, Lotto L, Buodo G, Sarlo M. It’s immoral, but I’d do it! Psychopathy traits affect decision-making in sacrificial dilemmas and in everyday moral situations. Br J Psychol. 2017;108(2):351–68.


  45. Ng NL, Neumann CS, Luke DM, Gawronski B. Associations of aversive (‘dark’) traits and affiliative (‘light’) traits with moral-dilemma judgments: a preregistered exploratory analysis using the CNI model. J Res Pers. 2024;109:104450.


  46. Schmidt B, Holroyd CB, Debener S, Hewig J. I can’t wait! Neural reward signals in impulsive individuals exaggerate the difference between immediate and future rewards. Psychophysiology. 2017;54(3):409–15.


  47. Smillie LD, Jackson CJ, Dalgleish LI. Conceptual distinctions among Carver and White’s (1994) BAS scales: a reward-reactivity versus trait impulsivity perspective. Pers Indiv Differ. 2006;40(5):1039–50.


Acknowledgements

We would like to kindly thank Prof. Lorella Lotto for her valuable contribution to defining the experimental paradigm and Dr. Carolina Pletti for her assistance in software programming.


Funding

The authors have no funding to disclose.

Open access funding provided by Università degli Studi di Padova.

Author information

Authors and Affiliations



Contributions

Conception and design: MS; collection, analysis and interpretation of data: FDPC, GPP, NC, MS; article drafting and revising: FDPC, GPP, NC, MS. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Fiorella Del Popolo Cristaldi.

Ethics declarations

Ethics approval and consent to participate

All procedures were submitted and approved by the Ethical Committee for the Psychological Research of the University of Padua (protocol n. 2105), and were conducted in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All participants signed written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.




Cite this article

Del Popolo Cristaldi, F., Palmiotti, G.P., Cellini, N. et al. Pulling the lever in a hurry: the influence of impulsivity and sensitivity to reward on moral decision-making under time pressure. BMC Psychol 12, 270 (2024).
