Investigating two mobile just-in-time adaptive interventions to foster psychological resilience: research protocol of the DynaM-INT study

Abstract

Background

Stress-related disorders such as anxiety and depression are highly prevalent and cause a tremendous burden for affected individuals and society. In order to improve prevention strategies, knowledge regarding resilience mechanisms and ways to boost them is urgently needed. In the Dynamic Modelling of Resilience – interventional multicenter study (DynaM-INT), we will conduct a large-scale feasibility and preliminary efficacy test for two mobile- and wearable-based just-in-time adaptive interventions (JITAIs), designed to target putative resilience mechanisms. Deep participant phenotyping at baseline serves to identify individual predictors of intervention success in terms of target engagement and stress resilience.

Methods

DynaM-INT aims to recruit N = 250 healthy but vulnerable young adults in the transition phase between adolescence and adulthood (18–27 years) across five research sites (Berlin, Mainz, Nijmegen, Tel Aviv, and Warsaw). Participants are included if they report at least three negative burdensome past life events and show increased levels of internalizing symptoms while not being affected by any major mental disorder. Participants are characterized in a multimodal baseline phase, which includes neuropsychological tests, neuroimaging, bio-samples, sociodemographic and psychological questionnaires, a video-recorded interview, as well as ecological momentary assessments (EMA) and ecological physiological assessments (EPA).

Subsequently, participants are randomly assigned to one of two ecological momentary interventions (EMIs), targeting either positive cognitive reappraisal or reward sensitivity. During the following intervention phase, participants' stress responses are tracked using EMA and EPA, and JITAIs are triggered if an individually calibrated stress threshold is crossed. In a three-month-long follow-up phase, parts of the baseline characterization phase are repeated. Throughout the entire study, stressor exposure and mental health are regularly monitored to calculate stressor reactivity as a proxy for outcome resilience. The online monitoring questionnaires and the repetition of the baseline questionnaires also serve to assess target engagement.

Discussion

The DynaM-INT study intends to advance the field of resilience research by feasibility-testing two new mechanistically targeted JITAIs that aim to increase individual stress resilience, and by identifying predictors of successful intervention response. Determining these predictors is an important step toward future randomized controlled trials to establish the efficacy of these interventions.

Introduction

Background

Stress-related mental disorders such as depression and anxiety disorders are among the leading causes of disability worldwide [1,2,3] and cause a considerable burden to affected individuals, society, and the economy [4]. The general prevalence of mental disorders is particularly high in late teens and young adults in their twenties [5], with depression and anxiety showing a high rate of recurrence or persistence [6]. Although the link between stress and mental disorders has been well known for quite some time, the prevalence of stress-related disorders has not decreased in recent years [7]. Next to a failure to correctly implement clinical practice guidelines, one likely cause is the lack of appropriate and accessible prevention programs [7]. To inform prevention programs and help identify possible prevention targets, research should ideally not only investigate contributing factors and mechanisms related to vulnerability, dysfunction, and psychopathology, but also investigate resilience, in order to identify factors and mechanisms that help people stay healthy despite experiencing adversity [8].

Resilience can be defined as sustained or quickly recovering good mental health during and after experiencing adversity [9, 10]. This definition of resilience as an outcome rather than a trait reflects the difficulty of individually predicting good long-term mental health responses to stressor exposure from a person’s stable features or predispositions, and it acknowledges that staying mentally healthy appears to result from putatively dynamic and complex processes allowing successful adaptation to stressors [8, 10,11,12,13,14]. These processes are determined not only by individual predisposing factors (so-called ‘resilience factors’, e.g., a certain genotype, stable personality traits, or beliefs) but also by characteristics specific to the adverse events or circumstances and an interplay between the two, and they involve the activation of protective mechanisms (‘resilience mechanisms’) at the level of the individual or the environment. Defining resilience as an outcome implies that resilience research should make use of longitudinal study designs, assessing adversity as well as mental health at several time points to capture the dynamic nature of occurring stressors and the possible subsequent changes in mental health [8, 10]. Another necessary element of resilience studies is the assessment of resilience factors or mechanisms that can be linked to the outcome; these should ideally also be examined repeatedly, to uncover processes of adaptation [8].

Although some resilience factors are quite stable and will (mostly) not change much over the course of life (e.g., one’s genotype), other resilience factors are malleable and can undergo change, for example, triggered by the experience of adversity itself (e.g., one’s individual repertoire of emotion regulation strategies, which might increase after learning a new strategy during a period of adversity). Such individual adaptations have been termed allostatic resilience processes, as opposed to homeostatic resilience processes in which protective mechanisms are successfully engaged but an individual’s mode of operation in coping with adversity is not lastingly altered [12]. Malleable resilience factors are thus natural targets for prevention programs that aim to increase individual resilience [10, 15]. Studies have investigated several interventions designed to increase resilience, many of which focus on cognitive-behavioral or mindfulness-based methods, or a mix of both [16]. However, so far, many studies of resilience-fostering interventions show substantial methodological deficiencies, such as lacking a clear definition and operationalization of resilience, investigating effects of the intervention on single resilience factors instead of on outcome resilience, or omitting baseline diagnostics and long-term follow-ups [17].

The current study

The interventional study DynaM-INT of the EU Horizon 2020 project consortium DynaMORE (‘Dynamic Modelling of Resilience’ [18]) is designed to investigate two mobile- and wearable-based just-in-time adaptive interventions (JITAIs) aimed at fostering resilience and to predict their success based on participants’ baseline characteristics. The target sample consists of students and apprentices aged 18 to 27 years. During this period of life, several mental disorders appear for the first time or even reach their peak prevalence [19], and students seem to be a particularly vulnerable group for stress-related psychopathology [20,21,22,23,24]. Youth and emerging adults are also among the groups whose mental health was most strongly affected by the COVID-19 pandemic [25]. Insofar as early-onset stress-related problems are often associated with life-long mental vulnerability, investment in the mental health of emerging adults is likely to yield lasting gains and to be economically particularly efficient [26]. To ensure that we specifically include at-risk individuals, inclusion criteria comprise the prior experience of at least three negative life events that are perceived as burdensome [27] and a score in the mid-to-high range of the 28-item version of the General Health Questionnaire (GHQ) [28], a self-report instrument that captures internalizing symptomatology.

As a prospective-longitudinal resilience study, DynaM-INT entails a multimodal baseline characterization phase that focuses on potential resilience factors followed by longitudinal, biweekly assessments of a small number of hypothesized key resilience factors, considered potentially malleable, as well as of experienced stressors (E) and mental health problems (P) throughout the course of the study (online monitoring questionnaires). See Fig. 1 for a schematic overview of the study timeline.

Fig. 1
Study timeline. The study involves a baseline characterization phase, an ecological momentary intervention phase, and a follow-up phase. On-site assessments are done at the beginning of the baseline and follow-up phases. In Berlin, Tel Aviv, and Warsaw, all baseline on-site assessments are conducted on one day, while in Mainz and Nijmegen, these baseline assessments are split into two days: M.I.N.I. interview and blood sampling are done on day 1, all remaining procedures are performed on day 2. On both testing days in Mainz and Nijmegen, a urine drug test is conducted. On-site assessments are complemented by regular online monitoring of stressors, mental health problems, and selected resilience factors. Abbreviations: EMA, ecological momentary assessment; EMI, ecological momentary intervention; EPA, ecological physiological assessment; JITAI EMI, just-in-time ecological momentary intervention; M.I.N.I., Mini-International Neuropsychiatric Interview

Repeated E and P monitoring implements the Frequent Stressor and Mental Health Monitoring (FRESHMO) paradigm, which we have developed specifically for the purpose of longitudinal resilience studies [12]. E and P scores are used to calculate stressor reactivity (SR) scores, the primary outcome variable and a proxy for outcome resilience [12], using a residualization approach [29, 30]. Specifically, we regress individuals’ mental health problems P on their stressor exposure E, both across all monitoring time points, to determine our sample’s normative E-P relationship. For any given time point, a participant’s regression residual from this normative E-P relationship reflects their SR relative to their current stressor exposure and the sample’s normative reactivity. Thus, positive residuals indicate that the participant experiences more mental health problems P than would be expected given their stressor exposure E (higher SR) at this time point, whereas negative residuals mean that a participant has fewer mental health problems than would be predicted at their given stressor exposure (lower SR) at this time point. Within-participant SR score time-courses will be calculated to investigate temporal fluctuations in reactivity and to relate these to the interventions (see below) and to potential changes in resilience factors resulting from the interventions [12]. The repeated assessment of several potential resilience factors in the online monitoring questionnaires is complemented by repetitions of parts of the baseline characterization phase after six and eight months (‘follow-up phase’; Fig. 1).
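To make the residualization concrete, the following minimal Python sketch computes SR scores from pooled E and P observations via a simple linear regression; all data and variable names are hypothetical, and the study's actual models may be more elaborate.

```python
import numpy as np

def stressor_reactivity(E, P):
    """Regress mental health problems (P) on stressor exposure (E)
    across all observations to obtain the sample's normative E-P
    relationship, then return each observation's residual as its SR
    score. Positive residuals = more problems than expected (higher
    SR); negative residuals = fewer problems than expected (lower SR).
    """
    slope, intercept = np.polyfit(E, P, deg=1)  # normative E-P line
    return P - (intercept + slope * E)          # residuals = SR scores

# Hypothetical pooled data: E = hassle exposure, P = GHQ-28 scores
E = np.array([2.0, 5.0, 8.0, 3.0])
P = np.array([10.0, 14.0, 30.0, 9.0])
print(stressor_reactivity(E, P))  # one SR score per observation
```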

Importantly, upon completion of the baseline characterization phase, participants enter an ecological momentary intervention (EMI) phase where they are randomly assigned to one of two EMIs designed by our consortium that aim to improve two distinct resilience factors: ‘ReApp’, targeting positive cognitive reappraisal of recent stressful or negative events [64], or ‘Imager’, targeting reward sensitivity by positive mental imagery [31, 65]. The interventions are accompanied by ecological momentary assessments (EMA) using smartphones and ecological physiological assessments (EPA) using wearables (wristbands) to assess mood and stress reaction patterns in real time during real life and to allow triggering of EMIs as JITAIs at times of high stress.

Specifically, after calibration of individual EMA and EPA thresholds for stress responses on study devices as part of baseline characterization (‘calibration week’, see Fig. 1), participants are first trained in using the assigned intervention on their own phones without concurrent EPA (‘training weeks’). Then, participants are administered three EMI ‘booster weeks’ on study devices, during which real-time EMA and EPA data is used to trigger interventions specifically at moments when participants’ stress levels cross the individual threshold established during the calibration week (i.e., as a JITAI). The rationale behind this approach is that these interventions are thought to be most effective when participants apply the previously learned cognitive strategies at the moments when they are needed most [32]. The booster weeks take place every four weeks over the three-month EMI phase. Between the booster weeks, participants are encouraged to continue practicing the assigned intervention (‘practice weeks’) on their own phones. Supplementary Figure S1 depicts the different assessments per week type [see Additional file 1].

Research questions

The study is primarily designed to identify baseline predictors of the effect of our JITAIs on stressor reactivity as well as target engagement, in order to inform the design of future randomized controlled trials testing the efficacy of these interventions. To prepare predictor identification, we will first evaluate intervention feasibility and efficacy. We will evaluate feasibility by testing whether EMIs with a JITAI element that uses mobile phones and wristbands to trigger interventions specifically at times of high stress can be conducted on a large scale, focusing on i) technical implementation (feasibility research question 1, fQ1) as well as ii) participant adherence (fQ2) and iii) participant experience (fQ3).

To preliminarily evaluate efficacy, we will quantify whether, relative to baseline, the interventions are accompanied by i) reductions in SR scores (efficacy research question 1, eQ1) and ii) increases in the respective target engagement (eQ2). For target engagement specifically, we will assess changes in the use frequency of positive cognitive reappraisal during and after the ReApp JITAI and changes in reward sensitivity during and after the Imager JITAI. These patterns could be interpreted as further evidence for intervention success [31, 64]. The efficacy tests primarily use the biweekly assessed self-report measures of stressor exposure, mental health, positive cognitive reappraisal, and reward sensitivity.

Our efficacy tests will be further facilitated by the possibility to compare DynaM-INT results to data from the purely observational DynaM-OBS study [33], to which DynaM-INT is the follow-up study. DynaM-OBS uses the same type of baseline characterization and repeated assessment of E, P, and resilience factors (specifically positive cognitive reappraisal) in a study sample and over a time period comparable to DynaM-INT. DynaM-OBS thus provides us with an estimate of the natural course of SR and target engagement measures that can be used as a discovery sample and as a background against which the effects of the interventions in DynaM-INT can be assessed. Note that DynaM-OBS cannot be considered a formal control condition, but may provide an informal effect estimate justifying future randomized controlled trials (RCTs) with appropriate control conditions.

Following these evaluations of feasibility (fQ1-3) and efficacy (eQ1-2), we will address our primary research questions, namely, examining variables assessed in the baseline characterization phase to identify those that moderate (predict) the efficacy of either of the two interventions on i) stressor reactivity (primary research question 1, pQ1) and ii) target engagement (pQ2). The exact list of potential moderator variables to be examined, besides initial levels of positive cognitive reappraisal and reward sensitivity, will depend on the results of the DynaM-OBS study. Specifically, in DynaM-INT we will focus on predictors of low SR scores obtained from DynaM-OBS. These investigations aim to prepare future RCTs intended to test the efficacy of these interventions where baseline data serves to guide intervention administration only to individuals that are likely to benefit from a given intervention.

As a follow-up to our two primary research questions, we will examine whether the anticipated reductions in stressor reactivity are preceded or accompanied by the anticipated increases in target engagement (secondary research question, sQ1), which would suggest that the interventions exert their effects via the targeted resilience mechanisms.

A tertiary set of main research questions (tertiary research question, tQ) to be answered with DynaM-INT is related to Positive Appraisal Style Theory of Resilience (PASTOR) [10], the core theoretical framework of the DynaM-INT study. Positive appraisal style (PAS) is the tendency of an individual to appraise potential stressors in a positive (i.e., non-negative) way while at the same time avoiding delusionally positive appraisals. Positive appraisers typically generate appraisals that range from realistic to slightly unrealistically positive. Such a positive appraisal style is thought to enable the individual to exhibit optimal, fine-tuned stress reactions that are sufficient to cope with the stressor but do not exceedingly exhaust resources, which reduces the likelihood of developing mental health problems in adverse life situations. PASTOR claims that PAS is the key proximal resilience factor, in that the effects of all other resilience factors on outcome resilience are mediated by their effects on PAS [10]. In PASTOR, positive cognitive reappraisal is one important sub-class of cognitive processes that generate positive appraisals [10, 34], and it is therefore claimed that individuals who use positive cognitive reappraisal more frequently and/or more efficiently are likely to have higher PAS. Thus, positive cognitive reappraisal is an important component of PAS, which is why it is here targeted by the ReApp EMI. By contrast, reward sensitivity, as targeted by the Imager EMI, is a separate potential resilience factor that is thought to promote resilience insofar as it helps individuals generally appraise stressful situations in a more benign fashion, by better integrating positive information into the overall appraisal. Hence, one can ultimately assume that both the ReApp and the Imager EMIs promote resilience by promoting PAS.

PAS (like positive cognitive reappraisal and reward sensitivity) is considered a malleable resilience factor. Accordingly, in our study design, self-report measures of PAS (like measures of the two EMI targets) are not only taken in the questionnaire battery of the baseline characterization phase but also when the characterization is repeated at follow-up as well as in the biweekly online monitoring questionnaires (see Fig. 1). This allows us to ask whether the interventions are accompanied by increases in PAS relative to baseline (tQ1), whether the anticipated reductions in stressor reactivity are preceded or accompanied by the anticipated increases in PAS (tQ2), and whether the anticipated increases in PAS are preceded or accompanied by the anticipated increases in target engagement (tQ3). These findings would suggest that the interventions promote resilience by promoting PAS. Beyond intervention effects, we will examine whether individuals with high baseline PAS show less stressor reactivity (tQ4) and whether changes in PAS throughout the course of the study will be accompanied by inverse changes in stressor reactivity (tQ5), irrespective of the treatment.

The research questions are summarized in Table 1; additional exploratory research questions are outlined in the analysis section. The DynaM-INT data set will be made available to researchers to address other possible research questions.

Table 1 List of research questions

Methods

Study centers and study period

The multi-center study takes place in five research facilities: the Department of Psychiatry and Neurosciences at Charité – Universitätsmedizin Berlin, Berlin, Germany; the Neuroimaging Center at Johannes Gutenberg University Medical Center in Mainz, Germany; the Donders Centre for Cognitive Neuroimaging and Radboud university medical center in Nijmegen, the Netherlands; the Sagol Brain Institute at Tel Aviv University and Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; and the Faculty of Psychology at the University of Warsaw, Warsaw, Poland. Data acquisition started in April 2022. Completion of the baseline characterization phase is expected in May 2023, completion of the intervention phase in September 2023, and completion of the follow-up phase in December 2023.

Participants

In total, N = 250 healthy male and female participants are planned to be recruited at the five study sites (N = 50 each). Where a study site cannot fulfil its recruitment goal, other sites will attempt to compensate. Participants need to be 18–27 years old, be studying or in vocational training, have experienced at least three stressful life events [27] that they perceived as burdensome before inclusion, and report elevated levels of internalizing symptoms (a score of ≥ 20 on the GHQ, 28-item version [28]). All inclusion criteria are provided in Table 2.

Table 2 List of inclusion criteria and format in which they were assessed

Design

As shown in Fig. 1, the DynaM-INT study follows a prospective-longitudinal design, consisting of an (online) pre-screening for eligibility; a multimodal baseline characterization phase (including neuropsychological tests, neuroimaging, bio-samples, a sociodemographic and psychological questionnaire battery, and a video-recorded interview); a calibration week, in which individual stress thresholds are determined based on ecological momentary assessments (EMA) and ecological physiological assessments (EPA); an ecological momentary intervention (EMI) phase (including two training weeks in which participants become familiar with one of two randomly assigned interventions, three separate booster weeks in which JITAIs are triggered at times of high stress, intermittent optional EMI practice weeks without JITAI, and another video-recorded interview); and a follow-up phase in which parts of the baseline characterization phase are repeated (including the psychological questionnaire battery, bio-samples, and the video-recorded interview). In addition, biweekly online monitoring questionnaires are administered throughout the course of the study. For an extensive overview of all measures used and the days (d), weeks (w), and months (M) from baseline at which they are assessed (x), see Table 3.

Table 3 Overview of the measures used and the weeks (w) and months (M) from baseline, at which they are assessed (x: all sites; a: Mainz and Nijmegen; b: Berlin, Tel Aviv, and Warsaw)

Procedures

Recruitment and screening

Participants are recruited via e-mail distribution lists, social media advertisements, flyers, digital blackboards, and word-of-mouth. As a first step, potential participants are asked to fill out an anonymous online screening survey on SoSci Survey [36] that checks for inclusion criteria (Table 2) via an automated algorithm. To be able to link the pre-screening data to the study ID, potential participants generate an individual code that will be re-created on-site upon inclusion. Eligible participants receive an e-mail with the invitation to contact their study site to schedule a phone call.

Further inclusion criteria regarding past and present psychiatric diagnoses are assessed by trained staff using the Mini-International Neuropsychiatric Interview (M.I.N.I.) [35]. In Berlin, Tel Aviv, and Warsaw, the M.I.N.I. is conducted on the phone and all records are destroyed afterwards. Eligible participants are then scheduled for the baseline characterization phase. In Mainz and Nijmegen, participants receive an appointment for the first day of baseline assessments during which the M.I.N.I. is conducted and participants who are not eligible are treated as dropouts.

Baseline characterization phase (month 1)

Participants are characterized in a multimodal baseline characterization phase, consisting of on-site assessments, as well as online questionnaires and assessments in daily life. An overview of all procedural steps of the baseline assessments can be found in Table 4.

Table 4 Procedure steps at baseline

In Berlin, Tel Aviv, and Warsaw, all on-site baseline assessments are conducted on one day (“day 1 + day 2” in Table 4). In Mainz and Nijmegen, on-site baseline assessments are split into two days: in Nijmegen, the M.I.N.I. and blood sampling are done on day 1; in Mainz, the M.I.N.I., blood sampling, and the EMA/EPA briefing are done on day 1. All remaining on-site assessments are performed on day 2. In Berlin, Tel Aviv, and Warsaw, participants spend approximately 4 h in the laboratory on day 1. In Mainz and Nijmegen, they are present for approximately 1 h on day 1 and 3 h on day 2.

All participants receive written and verbal information about the study and provide written informed consent at the start of the baseline assessment. Next (at the start of both baseline days in the case of Mainz and Nijmegen), participants undergo a urine-based drug screening test (SureStep™ Multi-Drug One Step Screen Test Panel, Innovacon Inc., USA) for amphetamine, barbiturates, benzodiazepines, buprenorphine, clonazepam, cocaine, fentanyl, heroin, ketamine, cannabis, methadone, methamphetamine, methylenedioxymethamphetamine, morphine, opiates, oxycodone, phencyclidine, propoxyphene, tramadol, and tricyclic antidepressants. After a negative test, participants continue with the assessments.

Neuropsychological tests

Following inclusion, two neuropsychological tests are conducted: the Trail Making Test [37, 38], assessing visual attention and task switching speed, and the HAWIE Digit Symbol Test [39], measuring processing speed.

Neuroimaging

Participants receive a brief training of the neuroimaging paradigms, during which the experimenter provides verbal explanations, asks questions, and makes sure the participant understood the instructions, while showing an on-screen presentation of the tasks. Data acquisition parameters and the individual neuroimaging tasks are described in detail below.

When placed in the magnetic resonance imaging (MRI) scanner, participants are provided with earplugs. They receive a 4-button Inline Fiber Optic Response Pad (Current Designs [40], in Berlin, Mainz, Nijmegen, and Tel Aviv; an in-house developed system in Warsaw) in their right hand. They are presented with the visual stimulation of the tasks via a mirror placed on the head coil that shows a monitor placed behind the scanner bore. Before and after each task, the experimenter gives verbal instructions and receives feedback from the participant via an intercom system. The specific instructions are also shown on the screen before each task. After scanning, participants are asked to fill out an MRI exit interview questionnaire via SoSci Survey [36], asking about experiences and potential difficulties with the fMRI tasks.

Participants who are not eligible for the MRI procedure skip neuroimaging but take part in all other parts of the study.

Bio-samples

From each participant, 9 ml of blood (in Nijmegen: 10 ml) is drawn into an EDTA tube (red monovette; Sarstedt, Nümbrecht, Germany) and stored as whole blood at -20 °C or colder until assay of DNA and DNA methylation. In Mainz and Nijmegen, an additional 9 ml (Mainz) or 10 ml (Nijmegen) of blood is sampled into EDTA tubes for proteomic analyses. To limit the influence of metabolism or diurnal oscillations on proteomics measurements, blood at these two sites is drawn between 10:30 and 14:30, and participants are instructed to arrive having fasted for at least five hours. Blood samples for the proteomics assay are centrifuged, and serum is divided into 8–16 aliquots (depending on volume), which are stored at -80 °C until assay. In Tel Aviv, one additional tube (VACUETTE® TUBE 5 ml CAT Serum Separator Clot Activator) of blood is taken at each sampling time point to derive CRP.

Stool samples are collected using an OMNIgene-gut feces kit (OM-200, DNAgenotek). Participants receive a test kit, an instruction sheet about the collection procedure, the Bristol Stool Scale [41], and verbal instructions. They are instructed to collect the stool sample as close as possible to the return appointment, to take numerous small samples from different locations in the stool material, to fill out the Bristol Stool Scale, and to store the sample in a dark place away from direct sunlight until returning it at the next appointment. Stool samples are subsequently stored at -20 °C until assay of the gut microbiome or, in Nijmegen, directly shipped to the laboratory processing the microbiome samples.

Post-assessment procedures

At the end of the baseline day(s) (and each subsequent appointment), participants are asked in a standardized interview whether they have experienced emotional disturbances triggered by any element of the preceding session, to ensure their well-being. In case they report emotional disturbance and a need for help, participants are directed to a site-specific clinician associated with the study.

Online questionnaires

Following the on-site baseline day (Mainz and Nijmegen: baseline day 2), a schedule with the participant’s dates for all questionnaires is uploaded to SoSci Survey [36] to enable automatic e-mail dispatch. The schedule consists of an extended questionnaire battery, as well as shorter, biweekly monitoring questionnaires, used for the high-frequency longitudinal assessment of stressors and mental health (FRESHMO paradigm) as well as of malleable resilience factors (RFs) throughout the entire study [12]. RFs are assessed as a trait or style (the typical way or tendency in which a person reacts to life experiences) during the extended online batteries and as a mode (the extent to which the RF was used or experienced in the past two weeks [42]) during the biweekly monitoring questionnaires. Table 5 provides an overview.

Table 5 List of online self-report questionnaires

The extended questionnaire battery is administered as part of the baseline characterization phase and is sent out immediately. Participants are asked to finish the online questionnaire battery within one week. Also, three biweekly monitoring questionnaires form part of the baseline characterization phase (see Fig. 1). Participants have two days to fill out those shorter questionnaires.

Video-recorded interview

Besides traditional self-report instruments, the online questionnaire schedule contains a self-developed, fully structured and video-recorded interview asking participants about their experience of mental health problems as well as recent and upcoming emotional events. In each interview, participants record short video segments of themselves answering the respective questions. These interviews provide audio-visual data to identify interview-based digital biomarkers of mental health (DBMs). Details are given below.

Calibration week

In the week following the on-site baseline assessment day(s), EMA and EPA data is collected. Participants use a study smartphone (Motorola Moto E6 Play in Berlin, Mainz, Nijmegen, and Warsaw; Xiaomi Redmi 7/7A in Tel Aviv) with the RADAR aRMT app (adapted for use in DynaM-INT) for EMA data collection [51] and the Chill + wristband (developed by IMEC [52]) for EPA data collection. Participants receive a thorough explanation of the EMA and EPA devices, applications, and procedures.

Each day during usual waking hours (between 7:30 and 22:30), questionnaires of around 2 min length each are sent at 10 different time points (“beeps”) via push notifications to the smartphone. Each notification is semi-randomly scheduled within a 90-min block. The beep schedule is the same for all participants and is specified in Supplementary Table S1; EMA content can be found in Supplementary Figure S2 [see Additional file 1]. Each beep questionnaire remains online for 10 min, and participants receive a reminder notification 5 min after the initial beep notification.
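The fixed beep schedule used in the study is given in Supplementary Table S1. Purely as an illustration of the semi-random scheduling principle (one beep placed at random within each of ten consecutive 90-min blocks spanning the 15-h waking window), a schedule generator might look as follows; all names and defaults are illustrative assumptions.

```python
import random
from datetime import datetime, timedelta

def semi_random_beep_schedule(day_start="07:30", n_beeps=10,
                              block_min=90, seed=0):
    """Place one beep uniformly at random within each of n_beeps
    consecutive 90-min blocks, covering 07:30 to 22:30."""
    rng = random.Random(seed)
    start = datetime.strptime(day_start, "%H:%M")
    return [(start + timedelta(minutes=i * block_min
                               + rng.randrange(block_min))).strftime("%H:%M")
            for i in range(n_beeps)]

print(semi_random_beep_schedule())  # ten times, one per 90-min block
```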

EPA data is collected via the wristband for 16 h per day. The wristband also features a “stress” button that participants are instructed to press when they experience a stressful event. The calibration week lasts for six days.

All EMA data collected with the RADAR aRMT app is immediately and automatically uploaded to a server at the Donders Institute, where the initial feature extraction takes place in real time. After completion of each EMA questionnaire (via the RADAR aRMT app), participants are redirected to the DynaMORE Chill + app (developed by IMEC for use in DynaM-INT) to upload the 10 min of EPA data acquired right before each EMA notification to the server at the Donders Institute, where relevant features are extracted and motion-related artifacts are removed. A complete list of features is given in Supplementary Table S2 [see Additional file 1].

After the calibration week has finished, participants come back to the lab to return the study devices. All data collected with the Chill + app is downloaded by the researchers for additional offline feature extraction (of the entire 6 days × 16 h of EPA data). The baseline characterization phase is completed by randomly assigning one of the interventions to the participant based on a predetermined randomization sheet (computerized random allocation to one of the two EMIs).

Ecological momentary intervention phase (months 2–5)

The ecological momentary intervention (EMI) phase consists of two training weeks, three booster weeks, and nine encouraged practice weeks (see Fig. 1). Online monitoring questionnaires also continue to be sent to participants biweekly throughout the entire EMI phase. The video-recorded interview is repeated during the month 3—week 4 monitoring questionnaire.

Training weeks

Before the start of the two training weeks (14 days), participants receive a briefing on their assigned intervention (ReApp or Imager EMI) via a video call. Subsequently, they install the SEMA3 app [49] on their own phone and enroll in the assigned EMI. The purpose of the training weeks is to familiarize participants with the assigned intervention and to initiate habitual use of the cognitive techniques taught by the app. Participants receive three daily EMIs via push notifications, scheduled throughout the day during pseudo-random one-hour time windows (at 10:00, 14:30, and 19:00). Participants have 20 min to complete the EMI after they receive the push notification. Researchers are automatically notified by e-mail if compliance drops below 60%; in that case, participants are contacted to resolve potential problems. In addition, participants are asked to complete one EMI before going to bed (on demand). Participants are encouraged to manually start (additional) interventions whenever they want to. EMIs are always preceded by an EMA, which is identical to the EMAs performed during calibration. During the training weeks, EMI and EMA content is delivered via the SEMA3 app.

Booster weeks

Before the start of the first booster week, participants receive a refresher briefing, either in person when they pick up their devices, or via a video call. During the booster weeks, EMA and EPA data are collected analogously to the calibration week, using the RADAR aRMT app on study smartphones and Chill + wristbands. Incoming EMA and EPA data are analyzed in real time on a high-performance computing cluster at the Donders Centre for Cognitive Neuroimaging in Nijmegen. If the combination of extracted features exceeds the individual threshold (set to a goal of triggering three interventions per day, based on stressful situations from the calibration week), the assigned intervention is immediately triggered via the RADAR-BASE platform. The intervention arrives ~ 20 min after the start of the EMA questionnaires. A maximum of four interventions are triggered per day. Thresholds are adjusted on a daily basis to accommodate signal drift.

Additionally, each day starts with a morning questionnaire and ends with an evening questionnaire also shown in the RADAR aRMT app on the study smartphone, given in Supplementary Table S2 [see Additional file 1]. The evening questionnaire is followed by an additional intervention, ensuring that all participants receive at least one intervention per day. Participants are encouraged to start additional interventions themselves whenever they want to. Each booster week lasts for six days.

Practice weeks

Participants are encouraged to use the SEMA3 app on their own phone during the remaining weeks of the EMI phase (i.e., during the weeks in between booster weeks). During these encouraged practice weeks, participants do not receive notifications but are instructed to complete EMIs whenever they want to. Again, EMIs are always preceded by an EMA.

Follow-up phase (months 6–8)

Online monitoring continues during the follow-up phase and changes from biweekly to once a month during months 7 and 8. The extended online questionnaire battery is repeated during month 6—week 2 and month 8—week 4. Both assessments also include the video-recorded interview. In month 6—week 2, user experience of the JITAI is assessed with an adapted version of the user version of the Mobile Application Rating Scale (uMARS) questionnaire [53]. Follow-up blood and stool samples are also collected in month 6—week 2. See Fig. 1.

Remuneration

Complete participation in all assessments is remunerated with 340 EUR (in Tel Aviv 1200 NIS, in Warsaw 1200 PLN). Further, participants can win on average 10 EUR (40 NIS, 40 PLN) during the Monetary Incentive Delay task in the neuroimaging battery. Participants who finish all assessments are additionally included in a lottery to win a 100 EUR / 400 NIS / 400 PLN voucher on top (five vouchers in Berlin, Mainz, Nijmegen, and Tel Aviv; one in Warsaw). To maintain compliance throughout the longitudinal assessments, money is disbursed in tranches at different time points throughout the study, depicted in Supplementary Table S3 [see Additional file 1].

Materials

Neuroimaging

MRI data acquisition

In Berlin, Mainz, Nijmegen, and Tel Aviv, brain imaging data are acquired on identical models of 3 T MAGNETOM Prisma systems (Siemens Healthineers, Erlangen, Germany) with 32-channel head coils (Tel Aviv: 64-channel head coil) using the following settings: Multiband gradient-echo echo planar imaging (EPI) sequences (TR = 800 ms, TE = 37 ms, flip angle = 52°, FOV = 208 mm, voxel size = 2.0 × 2.0 × 2.0 mm, 72 slices, MB acceleration factor = 8, phase-encoding direction = PA) from the Center for Magnetic Resonance Research, University of Minnesota, as adopted from the Human Connectome Project, are used for blood oxygen-level dependent (BOLD) fMRI [53]. Before each task, a pair of blip-up/blip-down EPI sequences is acquired (TR = 8000 ms, TE = 66 ms, flip angle = 90°, FOV = 208 mm, voxel size = 2.0 × 2.0 × 2.0 mm), one with an AP and one with a PA phase-encoding direction. Furthermore, a T1-MPRAGE sequence (TR = 2500 ms, TE = 2.22 ms, flip angle = 8°, FOV = 256 mm, voxel size = 0.8 × 0.8 × 0.8 mm) and a FLAIR sequence (TR = 9000 ms, TE = 83 ms, flip angle = 150°, FOV = 220 mm, voxel size = 0.7 × 0.7 × 3.0 mm) are acquired.

In Warsaw, a 3 T MAGNETOM Trio system (Siemens, Germany) is used until October 2022. There, multiband gradient-echo EPI sequences are acquired with the following settings: TR = 1410 ms, TE = 30.4 ms, flip angle = 56°, FOV = 210 mm, voxel size = 2.5 × 2.5 × 2.5 mm, 60 slices, MB acceleration factor = 3, phase-encoding direction = PA. Additionally, blip-up/blip-down EPI sequences before each task (identical settings as other sites, except for voxel size = 2.5 × 2.5 × 2.5 mm), a T1-MPRAGE (TR = 1100 ms, TE = 3.32 ms, flip angle = 7°, FOV = 256 mm, voxel size = 1.0 × 1.0 × 1.0 mm), and a FLAIR sequence with identical settings as above are acquired. In October 2022, Warsaw replaced the Trio system with a 3 T MAGNETOM Prisma system (Siemens Healthineers, Erlangen, Germany) with 32-channel head coils using the same settings as Berlin, Mainz, Nijmegen, and Tel Aviv (described above).

Head movement is restricted by foam pads and tape on the forehead. All task paradigms are presented using the software Presentation® (Neurobehavioral Systems [54]) on a monitor placed behind the scanner bore, viewed via a mirror fixed on the head coil.

Reward sensitivity task

An adapted version of the Monetary Incentive Delay Task (MID) [55] is used to measure neural responses during anticipation and receipt of rewards and losses [56]. Participants are told that they can win or lose a small amount of money if they press a button fast enough once a target stimulus (white star) appears on the screen. Right before the target appears, a cue that is presented for 2 s indicates whether they can win (+ 3€/12NIS/12PLN, + 0.5€/2NIS/2PLN), lose (-0.5€/2NIS/2PLN, -3€/12NIS/12PLN) or neither win nor lose (0€/NIS/PLN) money during the following trial. The cue is followed by a jittered anticipation phase of 2–2.5 s, after which participants need to press a button with their index finger as soon as the target stimulus appears on the screen. Each trial ends with 2-s numeric feedback on the trial outcome as well as the cumulative gain. An adaptive algorithm changes the duration of target presentation within each condition based on the participant's past performance, to ensure that the experience of reward does not differ between participants depending on their task performance. If the participant’s hit rate is below 66%, the target duration is increased by 25 ms; otherwise, it is reduced by 25 ms. Reaction times and hit rates are collected as behavioral outcomes. A graphical depiction of the task design is provided in Supplementary Figure S3 [see Additional file 1]. The reward sensitivity task was used identically in the DynaM-OBS study [33] and the Mainz Resilience Project (MARP) study [56, 57].
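The adaptive staircase described above can be sketched as follows; the function name, the starting duration, and the per-condition bookkeeping are our own illustrative assumptions, not taken from the task code.

```python
def update_target_duration(duration_ms, hit_rate,
                           step_ms=25, criterion=0.66):
    """MID staircase sketch: if the hit rate within a condition falls
    below ~66%, present the target 25 ms longer (easier); otherwise
    present it 25 ms shorter (harder), so that the experience of
    reward is comparable across participants."""
    if hit_rate < criterion:
        return duration_ms + step_ms
    return duration_ms - step_ms

# Hypothetical usage: the duration drifts toward each participant's ability
duration = 250                                             # ms, illustrative
duration = update_target_duration(duration, hit_rate=0.5)  # -> 275 ms
duration = update_target_duration(duration, hit_rate=0.8)  # -> 250 ms
```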

Note that the DynaM-OBS data set will be used to identify the reward-related behavioral and neural measures from the task that are prospectively most strongly negatively associated with participants’ SR scores during that study [33]. These will be used in DynaM-INT as baseline indices of the targeted resilience factor reward sensitivity, complementary to the questionnaire-based self-report measures (see below). They will be tested in the main analyses of DynaM-INT as potential moderators of intervention effects (primary research questions on intervention success prediction, see Introduction and Table 1).

Situation-focused volitional reappraisal task

In the situation-focused volitional reappraisal task, assessing the ability to use positive cognitive reappraisal (reappraisal efficacy, reappraisal performance), participants are instructed to positively reinterpret or just view photographs which are either negative, positive, or neutral and to subsequently rate their affective state on a non-verbal scale [56, 58]. Stimuli were selected from the International Affective Picture System (IAPS) [59] and EmoPics [60] based on normative ratings regarding valence and arousal. For details on the task design, see Supplementary Figure S4 [see Additional file 1]. The situation-focused volitional reappraisal task was used identically in the DynaM-OBS study [33]. Timing of the current task is identical to the MARP study [56, 57], but a different set of IAPS/EmoPics stimuli [59, 60] is used.

Note that the same approach as above for the reward sensitivity task will be used to decide which measures from this task to include in the main analyses of DynaM-INT.

Implicit emotion processing task

An adaptation of the face matching task [61, 62] is used to assess the participants’ neural responses during implicit emotion processing. In each trial, participants are presented with one picture at the top and two pictures at the bottom part of the screen, of which one is identical to the upper one. They are instructed to select the matching picture from the bottom row by pressing a button. In the emotion condition, the pictures are grayscale photographs of Ekman faces [43] with angry or fearful expressions. Faces are counterbalanced for sex and emotional valence. In the control condition, the pictures contain geometric shapes (circles, horizontal ellipses, and vertical ellipses). Four blocks per condition, each consisting of one instruction (2 s) and 6 trials (5 s each), are alternately presented. Details are given in Supplementary Figure S5 [see Additional file 1]. The implicit emotion processing task was used identically in the DynaM-OBS study [33].

Resting state

A 7-min resting-state scan is acquired during which participants are instructed to keep their eyes open and focus on a fixation cross in the middle of the screen. An identical resting-state scan was collected in the DynaM-OBS study [33]. In the MARP study [56, 57], a 6-min resting-state scan was included.

Online questionnaires

The assessment schedule of online questionnaires is outlined in Table 3.

Items of the extended questionnaire battery assess socio-demographic information at month 1 (study baseline), and general health, stressor exposure, mental health, as well as potential psycho-social resilience and risk factors (collectively termed ‘RFs’) at months 1, 6 and 8. RFs included in the battery are assessed as relatively stable styles or traits (i.e., the typical way or tendency in which a person reacts to life experiences). The measures included in the extended questionnaire battery at study baseline will be employed as potential moderators of intervention effects on the primary outcome variables, SR scores and target engagement (see primary research questions in Introduction and Table 1).

The biweekly monitoring questionnaires administered throughout the course of the study assess further information on stressor exposure, mental health, and central RFs necessary to calculate SR scores and target engagement measures as the main outcome variables. To build biweekly SR scores, these questionnaires contain repeated measures of mental health problems (P), assessed by the GHQ-28 [28], and of stressor exposure (E), assessed primarily via a daily hassles list (MIMIS, [44]). Further E measures assessed during the biweekly monitoring, related for example to the COVID pandemic, will be explored for their additional relevance when calculating SR scores (see Table 3).

Target engagement for ReApp is operationalized as the self-reported use frequency of positive cognitive reappraisal (assessed with the items on acceptance, positive reappraisal, putting into perspective, and distancing in the PASS-process questionnaire) and for Imager as the self-reported reward sensitivity (assessed using anticipatory items of the TEPS questionnaire). While RFs included in the extended questionnaire battery are assessed as relatively stable styles, RFs included in the biweekly monitoring were altered to be assessed as modes (i.e., the extent to which the RF was used or experienced in the past two weeks [42]). Complementary and secondary to the biweekly mode assessments, target engagement will also be determined from the corresponding style measures in the extended questionnaire battery.

Finally, biweekly monitoring questionnaires also include additional assessments of self-reported positive appraisals (crisis-related positive appraisals and content-focused perceived positive appraisal). These are not primary measures of target engagement; rather, they are used in moderation analyses and to address the tertiary research questions.

Table 5 provides a detailed overview of all questionnaires used in the DynaM-INT study. Validated versions of the questionnaires and their translations to the site-specific languages are used whenever available. An overview of questionnaire validations for the different study languages, as well as the self-developed questionnaires can be found on OSF [63].

Video-recorded interview

Each video-recorded interview comprises 13 questions on current mental health problems and recent or future experiences (40 s per recorded answer). Eight questions are based on the four subscales of the GHQ-28 [28] that represent four symptom clusters of psychological distress (somatic complaints, anxiety/insomnia, social dysfunction, and severe depression), with two interview questions per cluster. Four other questions ask about recent positive and negative memories or future expectations, respectively. One additional neutral question serves to establish a baseline for participants’ facial expressivity and vocal features.

Using pretrained open-source algorithms, a comprehensive set of potential DBMs will be extracted from the audio and video material, which roughly fall into four categories: facial expressivity (e.g., positive and negative emotions and overall expressivity), vocal features (e.g., voice pitch and shimmer), movement (e.g., gaze and head movement), and speech content (e.g., the sentiment of answers and word usage). A detailed description of the interview and the analysis will be provided elsewhere.

Ecological momentary and physiological assessments

Each EMA questionnaire includes in-the-moment self-assessments of mood (affect), social context, physical context, past event appraisal, and future event appraisal. The morning questionnaire (~1 min) contains questions regarding the last night’s sleep and the phase of the menstrual cycle. The evening questionnaire (~1 min) contains questions regarding the evaluation of the day, as well as stress anticipation of the upcoming day. Supplementary Figures S1 and S2 provide an overview of all assessed EMA items [see Additional file 1].

The Chill + collects four types of EPA data: photoplethysmogram (PPG; infrared and green PPG), galvanic skin response (GSR; one signal capped at 2 microsiemens (μS) and one at 20 μS), skin temperature (ST), and accelerometer (ACC; x, y, and z directions) data.

Feature extraction

Real-time feature extraction and analysis of EMA and EPA data for the purpose of stress-level determination rely on two separate data streams. The upload of EMA data to the Donders Centre for Cognitive Neuroimaging in Nijmegen is implemented in the RADAR-BASE platform. Feature extraction consists of averaging (per EMA beep) all reversed positive affect and all negative affect scores. Negative affect is based on EMA items: “I feel irritated, anxious, insecure and sad”; and positive affect is based on EMA items: “I feel happy, satisfied and relaxed”.
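A minimal sketch of this per-beep EMA feature extraction is given below; the 1-7 response scale, item keys, and function name are illustrative assumptions.

```python
def ema_affect_features(ratings, scale_max=7):
    """Average the negative affect items and the reverse-scored
    positive affect items of one EMA beep. 'ratings' maps item names
    to scores; higher returned values indicate a more negative state."""
    na_items = ["irritated", "anxious", "insecure", "sad"]
    pa_items = ["happy", "satisfied", "relaxed"]
    negative_affect = sum(ratings[k] for k in na_items) / len(na_items)
    reversed_positive = sum(scale_max + 1 - ratings[k]
                            for k in pa_items) / len(pa_items)
    return negative_affect, reversed_positive

beep = {"irritated": 5, "anxious": 4, "insecure": 3, "sad": 4,
        "happy": 2, "satisfied": 3, "relaxed": 2}
print(ema_affect_features(beep))  # (4.0, ~5.67)
```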

The upload of the EPA data is implemented in the DynaMORE Chill + app, which enables a Bluetooth connection between the phone and the Chill + device. The DynaMORE Chill + app collects 10 min of data prior to the EMA prompt time and sends it to the server hosted by the Donders Centre for Cognitive Neuroimaging in Nijmegen. The feature extraction algorithm assesses the quality of the incoming data and only calculates features from good-quality segments. The 10 min of data are analyzed in one-minute windows; the results of those separate windows are combined to obtain one value per feature for each 10-min data subset. Features directly used in the real-time decision algorithm (described below) are the number of spontaneous skin conductance responses, the magnitude of spontaneous skin conductance responses, the maximum heart rate, and the mean heart rate. The number of Chill + button presses (indicating subjectively reported stress moments) is also counted. Details are given in Supplementary Table S2 [see Additional file 1].

Threshold calculation

The features from the calibration week during the baseline characterization phase are used to calculate individual EMA/EPA baseline distribution parameters and thresholds for the JITAI triggering during the later intervention phase (booster weeks). For each of the included EMA and EPA features, individualized means and standard deviations are calculated and stored, which are later used to Z-score real-time data for each feature (i.e., relative to the individual baseline distribution).

All EMA features are Z-transformed and averaged into an average EMA Z-score. All EPA features are Z-transformed and averaged into an average EPA Z-score. We then fit a linear regression between the total magnitude of motion based on accelerometer data and the averaged Z-transformed EPA value. From this regression, the slope and intercept are also stored to residualize the EPA features with respect to motion during real-time analysis in the intervention phase. Finally, EMA Z-scores and motion-corrected average EPA Z-scores are averaged to create a distribution of combined EMA/EPA Z-scores. The initial triggering threshold for EMIs in the first booster week is set at the 60th percentile of this distribution (i.e., this value is exceeded in 40% of EMA/EPA beeps in the calibration week), aiming at three interventions per day, with an expected loss of 30% of beeps per day.
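Under these definitions, the calibration-week computations can be sketched as follows; array shapes, names, and the dictionary layout are illustrative assumptions about a pipeline that actually runs on the Donders server.

```python
import numpy as np

def calibration_parameters(ema, epa, motion):
    """Calibration sketch. ema/epa: (n_beeps, n_features) arrays of
    calibration-week features; motion: (n_beeps,) total accelerometer
    motion per beep. Returns per-feature baselines, the motion
    regression, and the initial triggering threshold."""
    ema_mu, ema_sd = ema.mean(axis=0), ema.std(axis=0)
    epa_mu, epa_sd = epa.mean(axis=0), epa.std(axis=0)

    # Per-beep average Z-scores across features
    ema_z = ((ema - ema_mu) / ema_sd).mean(axis=1)
    epa_z = ((epa - epa_mu) / epa_sd).mean(axis=1)

    # Regress average EPA Z-scores on motion; the slope/intercept are
    # stored to residualize real-time EPA data during booster weeks
    slope, intercept = np.polyfit(motion, epa_z, deg=1)
    epa_z_res = epa_z - (intercept + slope * motion)

    combined = (ema_z + epa_z_res) / 2
    return {"ema_mu": ema_mu, "ema_sd": ema_sd,
            "epa_mu": epa_mu, "epa_sd": epa_sd,
            "slope": slope, "intercept": intercept,
            "threshold": np.percentile(combined, 60)}  # 60th percentile
```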

Real-time decision algorithm

EMA and EPA data collected during the booster weeks in the intervention phase is compared to individual baseline distribution parameters to decide whether an intervention is triggered at that moment. For each new incoming set of EMA/EPA data (i.e., each beep), relevant features are calculated and standardized using the individual baseline distribution parameters (mean and standard deviation of that feature in the calibration week). Z-transformed EMA features are then averaged, resulting in an EMA Z-score for that beep. Z-transformed EPA features are also averaged and then residualized with respect to motion based on the total magnitude of motion obtained from the accelerometers during the same 10-min EPA recording (and using the regression parameters obtained from the calibration week), resulting in the motion-corrected EPA Z-score. Finally, the EMA Z-score and the motion-corrected EPA Z-score are averaged to result in the combined EMA/EPA Z-score.

If fewer than four interventions have been triggered for that participant on that day, the combined EMA/EPA Z-score is compared to the EMI triggering threshold, which was initially derived from the calibration week data. If this Z-score exceeds the threshold, or if there was a stress button press on the Chill + in the 10 min preceding the EMA questionnaire, an intervention is started. If no (high-quality) EPA data is available for a given beep, the decision is based on EMA features only.
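Combining these rules, the per-beep decision can be sketched as follows, reusing the hypothetical `calibration_parameters` output from the sketch above; all names are illustrative.

```python
def should_trigger(ema_feats, epa_feats, motion, params,
                   n_triggered_today, button_pressed, max_per_day=4):
    """Per-beep JITAI decision sketch. ema_feats/epa_feats: NumPy
    arrays of this beep's features (epa_feats=None if no good-quality
    EPA data); params: calibration-week baselines and threshold."""
    if n_triggered_today >= max_per_day:
        return False                      # daily cap of four reached
    if button_pressed:
        return True                       # stress button press in the
                                          # preceding 10 min
    ema_z = ((ema_feats - params["ema_mu"]) / params["ema_sd"]).mean()
    if epa_feats is None:
        return ema_z > params["threshold"]    # EMA-only decision
    epa_z = ((epa_feats - params["epa_mu"]) / params["epa_sd"]).mean()
    epa_z -= params["intercept"] + params["slope"] * motion
    return (ema_z + epa_z) / 2 > params["threshold"]
```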

Threshold adjustment algorithm

In addition to this per-beep algorithm, a second algorithm that dynamically adapts the triggering threshold runs each night. This second algorithm keeps track of the number of interventions per day and, at the end of the day, decreases the combined Z-score threshold by 0.01 if there have been too few interventions (< 3) or raises it by 0.01 if there have been too many (> 3).
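A sketch of this nightly update rule, with illustrative names:

```python
def adjust_threshold(threshold, n_triggered_today, target=3, step=0.01):
    """Nightly adjustment sketch: lower the combined Z-score threshold
    if fewer than three interventions fired today, raise it if more
    than three fired, and leave it unchanged at exactly three."""
    if n_triggered_today < target:
        return threshold - step
    if n_triggered_today > target:
        return threshold + step
    return threshold
```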

Ecological momentary interventions

Intervention 1: ReApp

The first intervention targets positive cognitive reappraisal. In this intervention, participants are asked to think about negative events they have experienced or are about to experience in the near future and to positively reinterpret them by generating positive reappraisals (e.g., learning from the event, the event having some unexpected positive aspects, advice that they would give to a friend, advice that they would receive from a friend). For details, see [64]. One intervention takes about 2–3 min.

Intervention 2: Imager

The second intervention targets reward sensitivity via the use of positive mental imagery. In this intervention, participants are asked to think about a pleasurable event that might happen to them during that day and to create a mental image of the situation. For details, see [31, 65]. One intervention takes about 2–3 min.

Data analysis

To evaluate the above research questions, we will conduct two sets of preparatory analyses (addressing feasibility and efficacy), and three sets of main analyses (addressing primary, secondary and tertiary research questions). See Introduction and Table 1.

Preparatory feasibility questions (fQ)

The first preparatory analysis addresses the feasibility of the just-in-time adaptive EMIs that are triggered at moments of high psychological and/or physiological stress. We will consider the technical implementation (fQ1) as well as participants’ adherence (fQ2) and experience (fQ3). These analyses have a descriptive character and may additionally inform exclusion criteria for the main analyses.

To assess the technical implementation of our real-time decision pipeline (fQ1), we will quantify the percentage of completed EMA beeps that yielded successful EPA uploads and feature extractions per booster week, the number of minutes per EPA upload in those weeks, and the percentage of triggered interventions per day in each booster week. Further, we will compare the EMA and EPA features of beeps that did and did not trigger an intervention, to investigate whether we indeed captured the most stressful moments of the day. Finally, we will examine whether the threshold adjustment algorithm works as expected by comparing the percentage of interventions triggered per week to the percentage that would have been triggered under a fixed threshold (i.e., without the threshold adjustment algorithm).

To assess adherence (fQ2), we will determine the percentage of completed EMA questionnaires, the percentage of completed triggered interventions, the number of completed self-triggered interventions, the total intervention adherence (i.e., the total number of completed triggered and self-triggered interventions), and the time spent using the aRMT application. All adherence measures will be calculated for each booster week separately, as well as summed for all booster weeks. The percentage of completed EMA questionnaires will additionally be calculated for the calibration week.

User experience (fQ3) is assessed with a shortened version of the user version of the Mobile Application Rating Scale (uMARS) [52], administered as part of the second extended online questionnaire battery in month 6, at the beginning of the follow-up phase (see Table 3). In addition to the general questions on app usability, we will specifically focus on user experience Q1 (“What changes did you observe, for example, in your mood, in your behavior etc., while using the app?”) and Q2 (“Did the app help you use skills during relatively stressful periods?”) for this feasibility research question.

Preparatory efficacy questions (eQ)

The second set of preparatory analyses addresses intervention effects on participants' individual stressor reactivity (SR) scores (eQ1) and target engagement (eQ2). Estimating training efficacy forms the basis for our main analyses of effect moderation (below) and will be achieved by comparing outcome scores during the training period (the intervention phase) to the pre-training baseline (the baseline characterization phase; see Fig. 1).

We choose the overall intervention phase as the outcome phase because the mHealth literature suggests different time courses over which training effects on health and wellbeing may emerge. For example, a recent meta-analysis reports that resilience interventions lasting only 8–12 weeks already affect different measures of resilience [66], although effects are not sustained at short-term (< 3 months post intervention), medium-term (3–6 months post intervention), or long-term follow-ups (> 6 months post intervention). For other health and wellbeing outcomes, there is evidence of incubation effects: the same meta-analysis shows delayed benefits for anxiety and stress measures, which were reduced not post intervention but at short-term follow-up. A meta-analysis of mHealth interventions likewise reports estimated effect sizes on health outcomes that increase with prolonged follow-up (up to 9 months) [67]. Considering that our resilience operationalization via SR scores aims to improve on previous resilience measures [12] and involves residualized mental health outcomes, effects in the present study may follow either pattern. The use of novel EMIs with a JITAI element adds further uncertainty. Intervention effects on SR scores and target engagement may thus emerge already after weeks or only after months of training.

We will estimate intervention effects using linear mixed models with repeated SR or target engagement measures (as either modes or styles) as endpoints, comparing measurements that are part of the baseline to those derived during the intervention training period. Long-term follow-up measurements will be treated separately. Our hypothesis is that participants develop lower SR scores and higher target engagement during the interventions.
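As an illustration of this modelling approach, the following sketch fits such a random-intercept model on synthetic long-format data (using Python with statsmodels; variable names and coding are hypothetical, not the study's analysis code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per repeated SR measurement;
# phase codes baseline (0) vs. intervention training period (1).
rng = np.random.default_rng(1)
n, reps = 60, 6
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), reps),
    "phase": np.tile([0, 0, 1, 1, 1, 1], n),
})
df["sr"] = rng.normal(size=len(df)) - 0.3 * df["phase"]  # toy training effect

# Linear mixed model with a random intercept per participant; the phase
# coefficient estimates the within-person change in SR from baseline to training.
result = smf.mixedlm("sr ~ phase", data=df, groups="pid").fit()
print(result.params["phase"])  # negative under the hypothesized SR reduction
```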

Primary research questions (pQ)

Our primary analysis goal is to assess whether variables (RFs) assessed at study baseline moderate (predict) the effect of ReApp, Imager, or both interventions on SR scores (pQ1) and target engagement measures (pQ2). We will address the pQ1 and pQ2 hypotheses statistically by evaluating the interaction between a given baseline variable and the respective intervention effect estimate, based on the efficacy questions (eQ). Depending on the strength of moderation, training effects may only be detected for a subgroup of participants (see e.g., [64]), such that group-level efficacy is not a prerequisite for addressing these primary research questions. While many baseline variables qualify as potential moderators, the most important ones are the self-reported use frequency of positive cognitive reappraisal for the ReApp intervention, and self-reported reward sensitivity for the Imager intervention (see Online Questionnaires for definition of variables). We hypothesize that lower baseline levels of these resilience factors will be associated with stronger effects of the respective intervention on SR scores (pQ1) and on target engagement (pQ2).
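Statistically, such moderation corresponds to an interaction term between the phase contrast and the baseline variable, as in this hedged sketch on synthetic data (rf stands in for, e.g., baseline reappraisal use frequency; all names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, reps = 60, 6
rf = rng.normal(size=n)  # one (mean-centered) baseline moderator value per participant
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), reps),
    "phase": np.tile([0, 0, 1, 1, 1, 1], n),
    "rf": np.repeat(rf, reps),
})
# Toy data: training lowers SR more strongly for lower baseline rf values.
df["sr"] = rng.normal(size=len(df)) + df["phase"] * (-0.3 + 0.2 * df["rf"])

result = smf.mixedlm("sr ~ phase * rf", data=df, groups="pid").fit()
# The phase:rf coefficient quantifies the moderation; a positive value here
# means the SR reduction weakens as the baseline moderator increases.
print(result.params["phase:rf"])
```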

Regarding the potential moderating influence of other psychosocial and neurobiological RFs, the exact analysis plan will depend on the results of the corresponding analyses in our DynaM-OBS observational study [33], which we use as a discovery sample to derive hypothesized moderators and the strength of these hypotheses (e.g., secondary, tertiary, exploratory).

Given that the two EMIs have differing mechanistic targets, we will first evaluate moderation effects separately in each intervention group. It is also possible that both interventions share common moderators, such as PAS. If, following the separate analyses, we observe or hypothesize a joint mechanism (such as ultimate effect mediation in both interventions by increases in PAS, see Introduction), we will pool participants across both interventions for combined efficacy and moderation analyses, maximizing statistical power. Conversely, if we observe or hypothesize differential results, we may instead contrast the two interventions with respect to their main effects and effect moderation. Because effect sizes in such intervention comparisons are typically small and raise power issues, we consider the latter analyses exploratory.

The above-described linear mixed models represent omnibus analyses of outcome measures across the entire intervention training period. They may thus be followed by post-hoc contrasts of individual measurement time points within the mixed-model framework, allowing us to explore sensitive periods for intervention effects.

Supplemental analysis approaches

Next to the moderation analyses using interaction terms outlined above, we will also examine simpler prospective associations between baseline variables of interest and repeated SR score measurements in separate regression models. We aim to replicate associations found in DynaM-OBS [33], and to compare intervention-related associations in DynaM-INT with associations in the natural time courses in DynaM-OBS. Further analyses may use the DynaM-OBS data [33] as an informal control condition against which the effects of the interventions in DynaM-INT can be assessed. Finally, we will also employ the DynaM-OBS study [33] to explore the applicability of more complex time-series analyses and to examine the relationships between the different positive appraisal-related measures beyond positive cognitive reappraisal frequency, before attempting to replicate the results in DynaM-INT.

Secondary research question (sQ1)

Our secondary research question is whether the anticipated reductions in stressor reactivity are preceded or accompanied by the anticipated increases in target engagement (sQ1), which would suggest that the interventions work via the targeted resilience mechanisms. To address this question, we will employ linear mixed models of SR–target engagement covariance and of lagged associations. Again, DynaM-OBS [33] results will be consulted to inform more complex time-series analyses, for example between positive cognitive reappraisal and SR, such as the choice of time-lag sizes.
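A lagged association of this kind could be set up as in the following sketch, where target engagement at one monitoring wave predicts SR at the next (synthetic data, hypothetical names):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, waves = 60, 8
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(waves), n),
    "te": rng.normal(size=n * waves),          # target engagement per wave
}).sort_values(["pid", "wave"])

# Target engagement at the previous monitoring wave, within participant.
df["te_lag1"] = df.groupby("pid")["te"].shift(1)
# Toy data: higher earlier engagement precedes lower SR at the next wave.
df["sr"] = rng.normal(size=len(df)) - 0.2 * df["te_lag1"].fillna(0)

lagged = df.dropna(subset=["te_lag1"])
result = smf.mixedlm("sr ~ te_lag1", data=lagged, groups="pid").fit()
print(result.params["te_lag1"])                # negative in this toy setup
```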

Tertiary research questions (tQ1-tQ5)

The assessments in DynaM-INT employ various tests potentially suitable to measure PAS. These include the following self-report instruments: the Perceived Positive Appraisal Style Scale – process-focused (PASS-process) [45], the Perceived Positive Appraisal Style Scale – content-focused (PASS-content) [45], self-generated questions on crisis-related positive appraisals [63], an optimism questionnaire [48], a control questionnaire [47], and a self-efficacy questionnaire [46] (Table 5). For our tertiary research questions, we will examine their relation to stressor reactivity, target engagement, and potential changes over the study period (tQ1–tQ5), using measurements from the relevant time points.

These questionnaires are included in the extended questionnaire batteries administered in the baseline characterization and follow-up phases; the PASS-process and PASS-content are additionally included in the biweekly online monitoring questionnaires. A non-questionnaire measure is the situation-focused volitional reappraisal fMRI task administered in the baseline characterization phase, which has also been employed in earlier studies, including DynaM-OBS, to establish the PAS construct and to test its relationship to resilience [33, 57, 68]. These earlier data sets are being used to specify the optimal PAS measure for DynaM-INT before PAS-related analyses are conducted in this data set.

Additional analyses

Digital biomarkers from audiovisual recordings

To obtain more objective and sensitive indicators of participants’ mental health problems, we aim to identify digital biomarkers of mental health (DBMs) from the audiovisual data derived from participants’ video-recorded interviews. The interviews are completed at four timepoints throughout the study. Using pre-trained open-source algorithms, features that represent potential DBMs, such as voice pitch, will be extracted from the recordings. Subsequently, we will use machine learning-based analyses such as feature selection to identify those features that best align with self-reported GHQ scores in a data-driven fashion. Next to convergent validity with the GHQ, we will also consider discriminant validity to other questionnaires, test–retest reliability, and consistency across multiple analysis approaches.
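The selection step could, for example, look as follows (a minimal sketch with synthetic placeholder data; the real pipeline would first extract acoustic and visual features, such as frame-wise voice pitch, from each recording using pre-trained open-source toolkits):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

# Placeholder data: rows are interviews, columns are candidate DBM features
# (e.g., summary statistics of voice pitch); y_ghq are self-reported GHQ scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y_ghq = rng.normal(size=200)

# Retain the k features whose univariate association with GHQ is strongest.
selector = SelectKBest(score_func=f_regression, k=10)
X_dbm = selector.fit_transform(X, y_ghq)
selected = selector.get_support(indices=True)  # indices of the retained features
```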

In a second step, we aim to combine the identified features into DBM-based P scores and use them to calculate DBM-based SR scores, which can complement the primary, fully questionnaire-based SR as an additional outcome in addressing the above hypotheses. For example, we will investigate intervention effects on DBM-based SR scores, whether the same RFs that predict questionnaire-based SR also predict DBM-based SR, and whether RFs that are not measured via self-report questionnaires, such as fMRI task-based activation or biological data from the bio-samples, show stronger associations with DBM-based than with questionnaire-based SR scores. Next to using identified DBMs in a complementary outcome measure, we will also explore how potential DBMs relate to the main questionnaire-based SR as predictors, and whether any features relate to or predict intervention success.

Discussion

With the DynaM-INT study, we advance the field of resilience research by investigating two just-in-time adaptive interventions (JITAIs) targeted at strengthening putative resilience factors. The design allows us to investigate the feasibility of just-in-time EMIs triggered at moments of high psychological and physiological stress in real life. The multimodal baseline characterization further enables us to identify predictors of the effects of each intervention on stressor reactivity and target engagement, while the dense longitudinal measures allow us to investigate whether the JITAIs are followed by reductions in stressor reactivity and increases in target engagement over time. The DynaM-INT study thereby aims to inform future studies testing the efficacy of these interventions about which parameters are important to consider. Moreover, it yields a rich database that can be shared with other researchers in the field of resilience research.

Availability of data and materials

Self-generated questionnaires are available at OSF [44].

Abbreviations

3T: 3 Tesla
AP: Anterior-posterior
BOLD: Blood oxygen level dependent
d: Day
DBM: Digital biomarkers of mental health derived from audiovisual recordings
DNA: Deoxyribonucleic acid
DynaM-INT: Dynamic Modelling of Resilience – Interventional Study
DynaM-OBS: Dynamic Modelling of Resilience – Observational Study
E: Experienced stressors
EMA: Ecological momentary assessment
EMI: Ecological momentary intervention
EPA: Ecological physiological assessment
EPI: Echo-planar imaging
FLAIR: Fluid-attenuated inversion recovery
(f)MRI: (Functional) magnetic resonance imaging
FOV: Field of view
FRESHMO: Frequent stressor and mental health monitoring
GHQ: General Health Questionnaire
HAWIE: Hamburg Wechsler Intelligence Test for Adults
h: Hour
HR: Heart rate
IAPS: International Affective Picture System
ID: Identifier
JITAI: Just-in-time adaptive intervention
MB: Multiband
MID: Monetary incentive delay task
M.I.N.I.: Mini-International Neuropsychiatric Interview
ml: Milliliter
M: Month
ms: Millisecond
NIS: New Israeli Shekel
nr: Number
P: Mental health problems
PA: Posterior-anterior
PAS: Positive appraisal style
PASS-content: Perceived Positive Appraisal Style Scale, content-focused
PASS-process: Perceived Positive Appraisal Style Scale, process-focused
PASTOR: Positive appraisal style theory of resilience
PLN: Polish Złoty
R > NR: Contrast regulate > no regulate
RF: Resilience or risk factor
s: Second
SR: Stressor reactivity
T1-MPRAGE: Magnetization-prepared rapid acquisition with gradient echoes
TE: Echo time
TR: Repetition time
w: Week

References

  1. James SL, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392:1789–858.

  2. Baxter AJ, Vos T, Scott KM, Ferrari AJ, Whiteford HA. The global burden of anxiety disorders in 2010. Psychol Med. 2014;44:2363–74.

  3. Liu Q, He H, Yang J, Feng X, Zhao F, Lyu J. Changes in the global burden of depression from 1990 to 2017: Findings from the Global Burden of Disease study. J Psychiatr Res. 2020;126:134–40.

  4. Olesen J, Gustavsson A, Svensson M, Wittchen H-U, Jönsson B, on behalf of the CDBE2010 study group, et al. The economic cost of brain disorders in Europe. Eur J Neurol. 2012;19:155–62.

  5. Plana-Ripoll O, Momen NC, McGrath JJ, Wimberley T, Brikell I, Schendel D, et al. Temporal changes in sex- and age-specific incidence profiles of mental disorders—A nationwide study from 1970 to 2016. Acta Psychiatr Scand. 2022;145:604–14.

  6. Gustavson K, Knudsen AK, Nesvåg R, Knudsen GP, Vollset SE, Reichborn-Kjennerud T. Prevalence and stability of mental disorders among young adults: findings from a longitudinal study. BMC Psychiatry. 2018;18:65.

  7. Jorm AF, Patten SB, Brugha TS, Mojtabai R. Has increased provision of treatment reduced the prevalence of common mental disorders? Review of the evidence from four countries. World Psychiatry. 2017;16:90–9.

  8. Kalisch R, Baker DG, Basten U, Boks MP, Bonanno GA, Brummelman E, et al. The resilience framework as a strategy to combat stress-related disorders. Nat Hum Behav. 2017;1:784–90.

  9. Bonanno GA, Westphal M, Mancini AD. Resilience to Loss and Potential Trauma. Annu Rev Clin Psychol. 2011;7:511–35.

  10. Kalisch R, Müller MB, Tüscher O. A conceptual framework for the neurobiological study of resilience. Behav Brain Sci. 2015;38:e92.

  11. Kalisch R, Cramer AOJ, Binder H, Fritz J, Leertouwer IJ, Lunansky G, et al. Deconstructing and Reconstructing Resilience: A Dynamic Network Approach. Perspect Psychol Sci. 2019;14:765–77.

  12. Kalisch R, Köber G, Binder H, Ahrens KF, Basten U, Chmitorz A, et al. The Frequent Stressor and Mental Health Monitoring-Paradigm: A Proposal for the Operationalization and Measurement of Resilience and the Identification of Resilience Processes in Longitudinal Observational Studies. Front Psychol. 2021;12:710493.

  13. Ungar M. Resilience and Culture: The Diversity of Protective Processes and Positive Adaptation. In: Theron LC, Liebenberg L, Ungar M, editors. Youth Resilience and Culture: Commonalities and Complexities. Dordrecht: Springer, Netherlands; 2015. p. 37–48.

  14. Bonanno GA, Romero SA, Klein SI. The Temporal Elements of Psychological Resilience: An Integrative Framework for the Study of Individuals, Families, and Communities. Psychol Inq. 2015;26:139–69.

  15. Infurna FJ, Luthar SS. Re-evaluating the notion that resilience is commonplace: A review and distillation of directions for future research, practice, and policy. Clin Psychol Rev. 2018;65:43–56.

  16. Joyce S, Shand F, Tighe J, Laurent SJ, Bryant RA, Harvey SB. Road to resilience: a systematic review and meta-analysis of resilience training programmes and interventions. BMJ Open. 2018;8:e017858.

  17. Chmitorz A, Kunzler A, Helmreich I, Tüscher O, Kalisch R, Kubiak T, et al. Intervention studies to foster resilience – A systematic review and proposal for a resilience framework in future intervention studies. Clin Psychol Rev. 2018;59:78–100.

  18. DynaMORE. https://dynamore-project.eu/. Accessed 22 Mar 2023.

  19. Reavley N, Jorm AF. Prevention and early intervention to improve mental health in higher education students: a review. Early Interv Psychiatry. 2010;4:132–42.

  20. Eisenberg D, Hunt J, Speer N. Help Seeking for Mental Health on College Campuses: Review of Evidence and Next Steps for Research and Practice. Harv Rev Psychiatry. 2012;20:222–32.

  21. Frajerman A, Morvan Y, Krebs M-O, Gorwood P, Chaumette B. Burnout in medical students before residency: A systematic review and meta-analysis. Eur Psychiatry. 2019;55:36–42.

  22. Gould J. Mental health: Stressed students reach out for help. Nature. 2014;512:223–4.

  23. Rotenstein LS, Ramos MA, Torre M, Segal JB, Peluso MJ, Guille C, et al. Prevalence of Depression, Depressive Symptoms, and Suicidal Ideation Among Medical Students. JAMA. 2016;316:2214.

  24. Tupler LA, Hong JY, Gibori R, Blitchington TF, Krishnan KRR. Suicidal Ideation and Sex Differences in Relation to 18 Major Psychiatric Disorders in College and University Students. J Nerv Ment Dis. 2015;203:269–78.

  25. Penninx BWJH, Benros ME, Klein RS, Vinkers CH. How COVID-19 shaped mental health: from infection to pandemic effects. Nat Med. 2022;28:2027–37.

  26. Stelmach R, Kocher EL, Kataria I, Jackson-Morris AM, Saxena S, Nugent R. The global return on investment from preventing and treating adolescent mental disorders and suicide: a modelling study. BMJ Glob Health. 2022;7:e007759.

  27. Cochrane R, Robertson A. The life events inventory: A measure of the relative severity of psycho-social stressors. J Psychosom Res. 1973;17:135–9.

  28. Goldberg DP, Gater R, Sartorius N, Ustun TB, Piccinelli M, Gureje O, et al. The validity of two versions of the GHQ in the WHO study of mental illness in general health care. Psychol Med. 1997;27:191–7.

  29. Amstadter AB, Myers JM, Kendler KS. Psychiatric resilience: longitudinal twin study. Br J Psychiatry. 2014;205:275–80.

  30. van Harmelen A-L, Kievit RA, Ioannidis K, Neufeld S, Jones PB, Bullmore E, et al. Adolescent friendships predict later resilient functioning across psychosocial domains in a healthy community cohort. Psychol Med. 2017;47:2312–22.

  31. Marciniak MA, Shanahan L, Myin-Germeys I, Veer I, Yuen KS, Binder H, Walter H, Hermans E, Kalisch R, Kleim B. Imager – An mHealth mental imagery-based ecological momentary intervention targeting reward sensitivity: A randomized controlled trial. PsyArXiv. 2022. https://doi.org/10.31234/osf.io/jn5u4.

  32. Wang L, Miller LC. Just-in-the-Moment Adaptive Interventions (JITAI): A Meta-Analytical Review. Health Commun. 2020;35:1531–44.

  33. Wackerhagen C, Veer IM, van Leeuwen JMC, Reppmann Z, Riepenhausen A, Bögemann SA, et al. Dynamic Modelling of Mental Resilience in Young Adults: Protocol for a Longitudinal Observational Study (DynaM-OBS). JMIR Res Protoc. 2023;12:e39817.

  34. Riepenhausen A, Wackerhagen C, Reppmann ZC, Deter H-C, Kalisch R, Veer IM, et al. Positive Cognitive Reappraisal in Stress Resilience, Mental Health, and Well-Being: A Comprehensive Systematic Review. Emot Rev. 2022;14:310–31.

  35. Sheehan DV, Lecrubier Y, Sheehan KH, Amorim P, Janavs J, Weiller E, et al. The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J Clin Psychiatry. 1998;59(Suppl 20):22–33; quiz 34–57.

  36. SoSci Survey. https://survey.charite.de/admin/. Accessed 22 Mar 2023.

  37. Bornstein RA. Normative data on selected neuropsychological measures from a nonclinical sample. J Clin Psychol. 1985;41:651–9.

  38. Tombaugh TN. Trail Making Test A and B: Normative data stratified by age and education. Arch Clin Neuropsychol. 2004;19:203–14.

  39. Ryan JJ, Lopez SJ. Wechsler Adult Intelligence Scale-III. In: Dorfman WI, Hersen M, editors. Understanding Psychological Assessment. Boston, MA: Springer, US; 2001. p. 19–42.

  40. Fiber Optic Response Devices Home Page. https://www.curdes.com/. Accessed 22 Mar 2023.

  41. Lewis SJ, Heaton KW. Stool Form Scale as a Useful Guide to Intestinal Transit Time. Scand J Gastroenterol. 1997;32:920–4.

  42. Bögemann S, Puhlmann L, Wackerhagen C, Zerban M, Riepenhausen A, Köber G, et al. Psychological resilience factors and their association with weekly stressor reactivity during the COVID-19 outbreak in Europe. JMIR Ment Health (forthcoming). https://doi.org/10.2196/46518.

  43. Ekman P, Friesen W. Pictures of Facial Affect. Palo Alto: Consulting Psychologists Press; 1976.

  44. Chmitorz A, Kurth K, Mey LK, Wenzel M, Lieb K, Tüscher O, et al. Assessment of Microstressors in Adults: Questionnaire Development and Ecological Validation of the Mainz Inventory of Microstressors. JMIR Ment Health. 2020;7:e14566.

  45. Petri-Romão P, Engen H, Rupanova A, Puhlmann L, Zerban M, Neumann RJ, et al. Self-report assessment of Positive Appraisal Style (PAS): development of a process-focused and a content-focused questionnaire for use in mental health and resilience research. PsyArXiv. 2023. https://doi.org/10.31234/osf.io/fpw94.

  46. Luszczynska A, Gutiérrez-Doña B, Schwarzer R. General self-efficacy in various domains of human functioning: Evidence from five countries. Int J Psychol. 2005;40:80–9.

  47. Kovaleva A. The IE-4: construction and validation of a short scale for the assessment of locus of control. In: Social Science Open Access Repository. Mannheim: GESIS; 2012.

  48. Chiesi F, Galli S, Primi C, Innocenti Borgi P, Bonacchi A. The Accuracy of the Life Orientation Test-Revised (LOT–R) in Measuring Dispositional Optimism: Evidence From Item Response Theory Analyses. J Pers Assess. 2013;95:523–9.

  49. SEMA3. https://sema3.com/. Accessed 22 Mar 2023.

  50. RADAR-base. https://radar-base.org/. Accessed 22 Mar 2023.

  51. imec | Wereldwijde R&D hub en Vlaamse innovatiemotor. https://www.imec.be/nl. Accessed 22 Mar 2023.

  52. Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. 2016;4:e72.

  53. Feinberg DA, Moeller S, Smith SM, Auerbach E, Ramanna S, Glasser MF, et al. Multiplexed Echo Planar Imaging for Sub-Second Whole Brain FMRI and Fast Diffusion Imaging. PLoS One. 2010;5:e15710.

  54. Neurobehavioral Systems. https://www.neurobs.com/. Accessed 22 Mar 2023.

  55. Knutson B, Adams CM, Fong GW, Hommer D. Anticipation of Increasing Monetary Reward Selectively Recruits Nucleus Accumbens. J Neurosci. 2001;21:RC159.

  56. Kampa M, Schick A, Sebastian A, Wessa M, Tüscher O, Kalisch R, et al. Replication of fMRI group activations in the neuroimaging battery for the Mainz Resilience Project (MARP). Neuroimage. 2020;204:116223.

  57. Kampa M, Schick A, Yuen K, Sebastian A, Chmitorz A, Saase V, et al. A Combined Behavioral and Neuroimaging Battery to Test Positive Appraisal Style Theory of Resilience in Longitudinal Studies. bioRxiv. 2018. https://doi.org/10.1101/470435.

  58. Kanske P, Heissler J, Schönfelder S, Bongers A, Wessa M. How to Regulate Emotion? Neural Networks for Reappraisal and Distraction. Cereb Cortex. 2011;21:1379–88.

  59. Lang P, Bradley MM. The International Affective Picture System (IAPS) in the study of emotion and attention. Handb Emotion Elicitation Assess. 2007;29:70–3.

  60. Wessa M, Kanske P, Neumeister P, Bode K, Heissler J, Schönfelder S. EmoPics: Subjektive und psychophysiologische Evaluation neuen Bildmaterials für die klinisch-bio-psychologische Forschung. Z Klin Psychol Psychother. 2010;39(Suppl. 1/11):77.

  61. Hariri AR, Mattay VS, Tessitore A, Kolachana B, Fera F, Goldman D, et al. Serotonin Transporter Genetic Variation and the Response of the Human Amygdala. Science. 2002;297:400–3.

  62. Wackerhagen C, Wüstenberg T, Mohnke S, Erk S, Veer IM, Kruschwitz JD, et al. Influence of Familial Risk for Depression on Cortico-Limbic Connectivity During Implicit Emotional Processing. Neuropsychopharmacology. 2017;42:1729–38.

  63. OSF | DynaM-INT - Dynamic Modelling of Resilience – Interventional Study. https://osf.io/sj6dq/. Accessed 22 Mar 2023.

  64. Marciniak MA, Shanahan L, Veer I, Walter H, Binder H, Hermans E, Timmer J, Kobylińska D, Puhlmann L, Tuescher O, Kalisch R, Kleim B. ReApp – an mHealth app increasing reappraisal: results from two randomized controlled trials. PsyArXiv. 2023. https://doi.org/10.31234/osf.io/u4f5e.

  65. Marciniak MA, Shanahan L, Binder H, Kalisch R, Kleim B. Positive prospective mental imagery characteristics in young adults and their associations with depressive symptom. Cognit Ther Res. 2023:1-12.

  66. Kunzler AM, Chmitorz A, Röthke N, Staginnus M, Schäfer SK, Stoffers-Winterling J, et al. Interventions to foster resilience in nursing staff: A systematic review and meta-analyses of pre-pandemic evidence. Int J Nurs Stud. 2022;134:104312.

  67. Yang Q, van Stee SK. The Comparative Effectiveness of Mobile Phone Interventions in Improving Health Outcomes: Meta-Analytic Review. JMIR Mhealth Uhealth. 2019;7:e11244.

  68. Chmitorz A, Neumann RJ, Kollmann B, Ahrens KF, Öhlschläger S, Goldbach N, et al. Longitudinal determination of resilience in humans to identify mechanisms of resilience to modern-life stressors: the longitudinal resilience assessment (LORA) study. Eur Arch Psychiatry Clin Neurosci. 2021;271:1035–51.

  69. Sacu S, Wackerhagen C, Erk S, Romanczuk-Seiferth N, Schwarz K, Schweiger JI, Tost H, Meyer-Lindenberg A, Heinz A, Razi A, Walter H. Effective connectivity during face processing in major depression - distinguishing markers of pathology, risk, and resilience. Psychol Med. 2022;53:1–13.

  70. Roberti JW, Harrington LN, Storch EA. Further Psychometric Support for the 10-Item Version of the Perceived Stress Scale. J Coll Couns. 2006;9:135–47.

  71. Cohen S, Kamarck T, Mermelstein R. A Global Measure of Perceived Stress. J Health Soc Behav. 1983;24:385.

  72. Derogatis LR, Cleary PA. Confirmation of the dimensional structure of the scl-90: A study in construct validation. J Clin Psychol. 1977;33:981–9.

  73. Üstün TB, Chatterji S, Kostanjsek N, Rehm J, Kennedy C, Epping-Jordan J, Saxena S, von Korff M, Pull C. Developing the World Health Organization Disability Assessment Schedule 2.0. Bull World Health Organ. 2010;88:815–23.

  74. Veer IM, Riepenhausen A, Zerban M, Wackerhagen C, Puhlmann LMC, Engen H, et al. Psycho-social factors associated with mental resilience in the Corona lockdown. Transl Psychiatry. 2021;11:67.

  75. Gard DE, Gard MG, Kring AM, John OP. Anticipatory and consummatory components of the experience of pleasure: A scale development study. J Res Pers. 2006;40:1086–102.

  76. Smith BW, Dalen J, Wiggins K, Tooley E, Christopher P, Bernard J. The brief resilience scale: Assessing the ability to bounce back. Int J Behav Med. 2008;15:194–200.

  77. Carver CS. You want to measure coping but your protocol’ too long: Consider the brief cope. Int J Behav Med. 1997;4:92–100.

  78. Korner A, Czajkowska Z, Albani C, Drapeau M, Geyer M, Braehler E. Efficient and valid assessment of personality traits: population norms of a brief version of the NEO Five-Factor Inventory (NEO-FFI). Arch Psychiatr Psychother. 2015;17:21–32.

  79. Kocalevent R-D, Berg L, Beutel ME, Hinz A, Zenger M, Härter M, Nater U, Brähler E. Social support in the general population: standardization of the Oslo social support scale (OSSS-3). BMC Psychol. 2018;6:31.

  80. Maor M, Gurion B, Ben-Itzhak S, Bluvstein I. The Psychological Flexibility Questionnaire (PFQ): Development, Reliability and Validity. 2014.

  81. Olthuis JV, Watt MC, Stewart SH. Anxiety Sensitivity Index (ASI-3) subscales predict unique variance in anxiety and depressive symptoms. J Anxiety Disord. 2014;28:115–24.

  82. Rizvi SJ, Quilty LC, Sproule BA, Cyriac A, Michael Bagby R, Kennedy SH. Development and validation of the Dimensional Anhedonia Rating Scale (DARS) in a community sample and individuals with major depression. Psychiatry Res. 2015;229:109–19.

  83. Teicher MH, Parigger A. The ‘Maltreatment and Abuse Chronology of Exposure’ (MACE) Scale for the Retrospective Assessment of Abuse and Neglect During Development. PLoS One. 2015;10:e0117423.

  84. Adler NE, Epel ES, Castellazzo G, Ickovics JR. Relationship of subjective and objective social status with psychological and physiological functioning: Preliminary data in healthy, White women. Health Psychol. 2000;19:586–92.

  85. Brinker JK, Dozois DJA. Ruminative thought style and depressed mood. J Clin Psychol. 2009;65:1–19.

  86. Torrubia R, Ávila C, Moltó J, Caseras X. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ) as a measure of Gray’s anxiety and impulsivity dimensions. Pers Individ Dif. 2001;31:837–62.

  87. Spielberger CD. State-Trait Anxiety Inventory. In: The Corsini Encyclopedia of Psychology. Hoboken, NJ: Wiley; 2010.

  88. Parker JDA, Taylor GJ, Bagby RM. The 20-Item Toronto Alexithymia Scale. J Psychosom Res. 2003;55:269–75.

  89. Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. 2016;4:e72.

  90. Garnefski N, Kraaij V. Cognitive emotion regulation questionnaire – development of a short 18-item version (CERQ-short). Pers Individ Dif. 2006;41(6):1045–53. https://doi.org/10.1016/j.paid.2006.04.010.

Acknowledgements

We thank S. Stöber and N. Donner for their help with project administration. For their help with study conduct, we thank the following people: At Charité: S. Blum, L. Do, E. Hodapp, C. Rohr, E. Rossi, C. Sachs; in Mainz: H. Fiehn, L. Knirsch, D. Laurila-Epe, A. Peschel, J. Piloth, C. Schultheis, C. Walter, M. Weber; in Nijmegen: N. Aslan, J. Posthuma, M. Schepers; in Tel Aviv: I. David, S. Berman, O. Gafni, R. Horovich, S. Ben-Dor, D. Even-Or, L. Lagziel; in Warsaw: S. Matys, N. Robak, J. Szurnicka, M. Wasylkowska.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement numbers 777084 (DynaMORE project) and 101016127 (RESPOND project), Deutsche Forschungsgemeinschaft (DFG Grant CRC 1193, subprojects B01, C01, C04, Z03), the German Federal Ministry for Education and Research (BMBF) as part of the Network for University Medicine under Grant number 01KX2021 (CEOsys and EViPan projects), and the State of Rhineland-Palatinate, Germany (MARP program, DRZ program, Leibniz Institute for Resilience Research). The funding agencies had no part in study design, collection, management, analysis, and interpretation of data, writing of the report, and the decision to submit the report for publication.

Author information

Contributions

Drafting the manuscript: SAB, AR, LMCP, RK. Conception & study design: RK, HW, IMV, AA-V, BK, DK, EH, HB, IM-G, JT, KR, KY, MM, OT, SP, TH, WdR. Acquisition of data: SAB, AR, LMCP, SB, EJCH, ZCR, AU, MZ, JW, KY. Critical editing and revision of the manuscript: All authors. All authors have approved the submitted version of this manuscript. All authors have agreed to be personally accountable for the author’s own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Corresponding author

Correspondence to S. A. Bögemann.

Ethics declarations

Ethics approval and consent to participate

The study was conducted in accordance with the Declaration of Helsinki and approved by the local ethics committees of all participating sites: the ethics committee of Charité – Universitätsmedizin Berlin, Germany; the medical ethics committee of Radboud university medical center (METC Oost-Nederland), Nijmegen, The Netherlands; the ethics committee of Tel Aviv University, Tel Aviv, Israel, and the Helsinki committee of the Tel Aviv Sourasky Medical Center; the ethics committee of the State Medical Board of Rhineland-Palatinate, Mainz, Germany; and the ethics committee for scientific research at the Faculty of Psychology, University of Warsaw (Komisja Etyki Badań Naukowych Wydziału Psychologii Uniwersytetu Warszawskiego), Warsaw, Poland. All study participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests, with the exception that RK has received advisory honoraria from JoyVentures, Herzlia, Israel.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

 Table S1. Beep schedule. Figure S1. Design of the different assessment weeks. Figure S2. EMA content. Table S2. Real-time features. Figure S3. Design of the reward sensitivity task. Figure S4. Design of the Situation-focused volitional reappraisal task. Figure S5. Design of the implicit emotion processing task (Faces task). Table S3. Remuneration schedule.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Bögemann, S.A., Riepenhausen, A., Puhlmann, L.M.C. et al. Investigating two mobile just-in-time adaptive interventions to foster psychological resilience: research protocol of the DynaM-INT study. BMC Psychol 11, 245 (2023). https://doi.org/10.1186/s40359-023-01249-5
