Guideline re-development included three stages: literature search, questionnaire development and Delphi consensus survey rounds.
Literature search
A systematic search was carried out to find statements about how a member of the public can help someone experiencing extreme distress following a potentially traumatic event, including what to do at the site of a potentially traumatic event, how to talk with someone following a potentially traumatic event, what to do if the person discloses abuse, how to offer short-term assistance to the person and how to assist them to seek appropriate professional help if it is needed. Online materials, research publications and books were included in the search for relevant statements. All searches were set to return results published since 2007, as the aim was to find content that had not been covered by the literature search in the original study.
Google search engines of several countries were tested in exploratory searches to identify a combination that provided the highest number of distinct results for the literature search. The search engines identified for use were Google.com, Google.com.au, Google.co.uk, Google.co.nz and Google.ca. Additional Google search engines from other countries (e.g. Denmark, Ireland, Sweden and the Netherlands) were found to produce a high proportion of duplicate results, so they were not used. The search terms entered were: “help* trauma past OR present OR current OR experiencing OR friend OR family OR someone”. These terms were chosen because they delivered more relevant results than other combinations. The original study used a single search term, “traumatic event”. Using the term ‘trauma’ and excluding the term ‘event’ returned results covering a broader range of potentially traumatic events. Searches were conducted in private or incognito mode to minimise the influence of Google’s search algorithms, and the search settings were adjusted each time to reflect the country of the search engine. Based on previous similar Delphi studies [27], websites appearing in the top 50 results from each search were reviewed, as previous studies have found that the quality of resources declines after the first 50 results [28]. Overall, 250 websites were reviewed for potential first aid helping actions; duplicate sites were deleted, and relevant statements were found on 32 of these sites. Any links on these websites that the authors thought might contain useful information were also followed, resulting in a total of 34 online sources, all of which contained relevant statements.
In addition, PsycInfo and PubMed were used to run a title search with the terms ‘trauma’, ‘post traumatic stress’, ‘PTS’ (truncated to include terms such as ‘PTS’ and ‘PTSD’), ‘stress’ (truncated to include terms such as ‘distress’ and ‘stressed’), ‘support’, ‘help’ (truncated as above), ‘assist’ and ‘first aid’. The term ‘life support’ was excluded to ensure the return of relevant results. In contrast, the single search term ‘trauma*’ was used in the original study, as attempts to narrow the search had been found to exclude too many relevant results. Searches on these databases returned a total of 2728 results. Duplicates were deleted and the remaining articles were screened for relevance in a tiered process, beginning with titles, then abstracts, and finally a full-text review. Following this process, 26 articles were deemed relevant; one further article identified in the web search was added for review, resulting in a total of 27 articles. Of the articles read in full, ten contained relevant statements.
To locate relevant books, an advanced search of Amazon.com was conducted using the terms ‘trauma’ (truncated as above), ‘help’ and ‘friend’. As in the original study, the Amazon website was chosen because of its extensive coverage of books, including works about mental health aimed at the public. The search differed from the original study principally in the inclusion of ‘help’ and ‘friend’ as search terms, as this combination produced the most relevant results when tested. The search returned 39 books, five of which were considered relevant. These five books were read, and relevant statements were found in one of them. Irrelevant results included books that were autobiographical in nature, self-help workbooks and clinical manuals.
Existing interventions for responding to traumatic events were also reviewed where possible. Course materials obtained were most commonly aimed at professionals or were focused on policies and procedures recommended for organisations.
Questionnaire development
Statements derived from the literature search were written up as individual questionnaire items. The first questionnaire comprised these items, together with statements from the previous Delphi questionnaires on trauma: those endorsed for the original guidelines and those endorsed by 50% or more of both original panels. These items were reviewed in light of the updated definition of mental health first aid and the first aider’s role, as well as for comprehensibility. Some items were reworded to capture new actions suggested by the literature search or to make them clearer.
In the current study, statements were considered acceptable for inclusion in the questionnaire if the authors agreed that they described how someone could help a person experiencing extreme distress following a potentially traumatic event. Items were required to be relevant to the role of the first aider, actionable and clear in meaning. Adhering to these criteria reduced researcher bias in the selection process; the researchers did not judge the importance of the items, as this was the role of the expert panels. Examples of statements included in the questionnaire are:
The first aider should communicate with the person as an equal, rather than as a superior or expert.
If the person experiences flashbacks, the first aider should ask the person how they wish to be supported when these occur.
Statements were sorted into thematic categories and similar statements were edited to reduce repetition. A working group comprising the authors reviewed this content and edited it to improve clarity, for example by re-wording statements and adding examples. The working group were all researchers with expertise in Delphi methodology and MHFA training programmes. Please see Additional file 1 for a copy of the Round 1 survey.
Delphi consensus survey rounds
The Delphi method [25] involved identifying and recruiting panels of experts in the field of psychological trauma. The expert panels then completed online questionnaires, rating each statement according to how important it was that the item be included in the guidelines, using a 5-point Likert scale (‘essential’, ‘important’, ‘don’t know/depends’, ‘unimportant’ or ‘should not be included’). Statements that achieved substantial consensus as being ‘essential’ or ‘important’ amongst the panellists (80% or more of the members of both panels) were considered to be recommended actions for helping someone after a potentially traumatic event. Questionnaires were presented to panellists via the online survey platform Survey Monkey in three sequential rounds.
Panel recruitment
Participants were recruited from high-income, western countries (including Australia, Canada, Ireland, New Zealand, the United Kingdom, the United States of America and countries throughout Europe) to join expert panels representing two areas of expertise: consumers (those with lived experience) or professionals. Participant recruitment was restricted to high-income, western countries with similar health care systems because experts from countries outside these criteria approach mental health first aid differently [29]. Many countries require guidelines tailored specifically for their local context, and some of the authors are currently involved in developing similar guidelines for a number of middle-income countries [22].
Panellists were required to have professional experience working in the field of psychological trauma (i.e. as a researcher, clinician or mental health worker), or personal experience of extreme distress following a potentially traumatic event. Prospective professional panellists were identified as experts through their involvement with mental health organisations or professional bodies, while consumer panellists were identified through their advocacy roles.
The 2008 study that developed the original guidelines had aimed to recruit three panels: mental health professionals, consumers and carers [21]. However, recruitment of carers was difficult and there were not enough carers to form a separate panel. The carers who were recruited also had lived experience as consumers and were thus included in the consumer panel instead. Due to these difficulties, as well as challenges in recruitment in more recent Delphi studies [24], it was decided that there would not be a carer panel in the present study.
Professional panellists were recruited through editorial boards of relevant academic journals, mental health advocacy organisations and professional bodies. Offices of organisations providing MHFA in relevant countries were also a source of recruitment. Researchers with published work in the field were directly invited to participate via email and all professionals were also asked to nominate any colleagues who they felt would be appropriate panel members.
Consumer advocate panellists were recruited through mental health advocacy organisations, including Beyond Blue (Australia), the Canadian Mental Health Association, Shine (Ireland), the Mental Health Foundation of New Zealand, the Scottish Mental Health Welfare Commission, Föreningen Hjärnkoll (Sweden) and Grow in America. Other advocates invited to participate in the consumer panel included public speakers and authors of websites or books offering support and information to those with lived experience of trauma and promoting recovery after a potentially traumatic event. Experts were also asked to nominate anyone else they felt would be an appropriate panel member. The recruitment criteria described above reduced researcher bias during the panel recruitment stage.
Participants represented a range of professions in the field of psychological trauma and resided in 10 different countries (see Additional file 5 for further demographics). Approximately half of all participants (54%) had another source of expertise on trauma in addition to their identified expertise, e.g. lived experience or carer experience as well as professional experience. It was not a requirement that participants had prior knowledge of MHFA training courses; participants were given a brief introduction to how the guidelines would be applied to training courses before completing the first survey.
Survey rounds
Participants rated the statements included in each questionnaire based on how important they judged them to be in relation to the role of the first aider and the aims of mental health first aid (participant instructions are included in the Additional files 1, 2 and 3).
Questionnaires were analysed using predetermined criteria, which categorised each statement into one of three groups determining whether it was endorsed for the guidelines, rejected, or re-rated in the following round (an illustrative sketch of this decision rule follows the list):
1. Endorsed: statements rated as ‘essential’ or ‘important’ by 80% or more of the members of both panels.
2. Re-rate: statements rated as ‘essential’ or ‘important’ by 70–79.9% of both panels, or by 80% or more of one panel and 70–79.9% of the other panel.
3. Rejected: all other statements were excluded.
In Round 1, panel members were also asked to provide open-ended feedback after each section of the questionnaire. This allowed panellists to suggest helping actions that were not included in the first questionnaire. The authors reviewed this feedback, and suggestions that contained original ideas were used to develop new helping statements for inclusion in the Round 2 questionnaire. Any statements for which feedback indicated uncertainty about their meaning were re-phrased to make them unambiguous. These were included in the Round 2 questionnaire along with the statements from Round 1 that met the criteria to be re-rated.
The final questionnaire (Round 3) comprised items that had been developed from Round 1 feedback and presented for the first time in Round 2, but that required re-rating in an additional round. Items that still did not achieve consensus in the final round were rejected from inclusion in the guidelines. See Additional files 1, 2 and 3 for copies of the three surveys.
Following each round, panellists were sent individualised reports containing a summary of the results. The report consisted of a list of statements that had been endorsed for guidelines inclusion, a list of statements that had been rejected, and a list of statements that were to be re-rated in the next survey round. Each report was tailored to include the individual panellist’s rating for each statement, alongside a summary of each panel’s ratings for the statement. This allowed panel members to compare their response to that of the two groups and decide whether to maintain or modify their ratings in the next survey round.
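As a rough illustration of the structure of this feedback, the sketch below shows one way a single entry in an individualised report could be assembled. The data structure and field names are hypothetical and do not reflect the actual report format used in the study.

```python
# Illustrative sketch only: assembling one entry of an individualised
# feedback report. Field names are hypothetical.

from dataclasses import dataclass


@dataclass
class ReportEntry:
    statement: str
    outcome: str               # "endorsed", "rejected" or "re-rate"
    own_rating: str            # the panellist's own rating, e.g. "important"
    professional_summary: str  # e.g. "essential 45%, important 40%, ..."
    consumer_summary: str      # the same summary for the consumer panel


def format_entry(entry: ReportEntry) -> str:
    """Render a single statement for a panellist's report."""
    return "\n".join([
        f"Statement: {entry.statement}",
        f"Outcome: {entry.outcome}",
        f"Your rating: {entry.own_rating}",
        f"Professional panel: {entry.professional_summary}",
        f"Consumer panel: {entry.consumer_summary}",
    ])
```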
The statements that were endorsed across the three survey rounds were compiled into thematic sections to form the draft guidelines. To improve comprehensibility, statements were re-written as an integrated text and any repetition was deleted. The authors met to finalise structure and wording. The draft of the guidelines was then disseminated to panellists for their final comment and endorsement. At this stage panellists could not suggest new content; however, they were able to provide feedback to improve clarity and reduce ambiguity.
Ethical considerations
The University of Melbourne Human Research Ethics Committee approved this research. Informed consent was obtained from all participants, who clicked ‘yes’ to a consent question in the Round 1 survey.