This Delphi study was conducted in four steps: (1) recruit expert panel members (participants), (2) conduct literature search and develop survey, (3) collect and analyse data and (4) re-develop the 2008 guidelines.
Step 1: recruit expert panels
People from high-income countries that have licensed the Mental Health First Aid program (Australia, Canada, Denmark, England, Finland, Ireland, the Netherlands, New Zealand, Northern Ireland, Scotland, Sweden, the United States and Wales) were invited to join one of three expert panels: Consumer, Carer or Professional. Researchers aimed to recruit at least 30 participants to each panel to allow for attrition and produce stable results [17].
Participants were recruited by sending a flyer to Australian and international networks, instructors associated with MHFA Australia, and to Australian and international mental health promotion and professional organisations, peak bodies, and advocacy and carer groups. Participants were asked to pass the flyer on to anyone they thought might be interested in participating.
As per previous Delphi studies (e.g. [19]), participants had to be 18 years or older. The specific expert panel selection criteria were:
-
Consumer panel – have lived experience of depression that is currently well managed AND be involved in activities that expose the participant to a broader experience of depression, e.g. an advisory or advocacy group, peer support, etc.
-
Carer panel – have experience in providing day-to-day support to someone with depression AND be involved in activities that expose the participant to a broader experience of depression, e.g. an advisory or advocacy group, peer support, etc.
-
Professional panel – have at least 2 years’ experience as a mental health professional or researcher in the field of depression.
Step 2: literature search and survey development
The first author conducted a literature search of both the ‘grey’ and academic literature in May 2016 to gather statements about how to provide mental health first aid to a person with depression. The literature search was conducted using Google Australia, Google USA, Google UK, Google Books and Google Scholar. Google Scholar was the only academic search engine used because it has a much broader interdisciplinary coverage than other databases and also covers grey academic literature. Our previous experience has been that searches of other databases covering research and professional literature rarely produce information relevant to lay mental health first aid strategies. The key search terms were ‘depression’, ‘clinical depression’, ‘major depressive disorder’, ‘depression carers’, ‘support depression sufferers’ and ‘help depression’. These terms were the terms used in the original Delphi study [20]. The following terms were also included:
-
‘how to help someone with depression’ – included because this is likely the phrasing a member of the public would use
-
‘major depressive episode’ – included because this is the term used in the DSM-5 diagnostic criteria
-
‘first aid for depression’ – included because first aid for mental health problems is a more widely used concept than it was at the time of the first Delphi study.
Based on previous similar Delphi studies [18], only the first 50 websites, journal articles and books returned for each search term were retrieved and reviewed for relevant information, as earlier Delphi studies found that the quality of resources declined rapidly after the first 50 [21].
To minimise the influence of Google’s search algorithms, the following steps were taken: signing out of any Google profiles, clearing the search history, disabling location features and deselecting ‘any country’. Links appearing in the websites were also reviewed. Websites, articles and books were excluded if they were duplicates, did not contain information about mental health first aid, or were published before the date of the previous Delphi literature search (2007). The content from 137 websites, 19 books and one journal article was analysed to develop the survey, with helping statements collated from these sources and reviewed by the research team to ensure consistent, simple language. Figure 1 summarises the literature search results.
The first author extracted the information from the articles, websites and books and drafted survey items. The research team reviewed the original extracted text and the drafted survey items to finalise them (see Fig. 2 for examples). The survey was administered via SurveyMonkey. Participants rated the survey items, “using a 5-point Likert scale (‘essential’, ‘important’, ‘don’t know/depends’, ‘unimportant’ or ‘should not be included’), according to whether or not they should be included in the guidelines” [22].
Step 3: data collection and analysis
Between March 2017 and April 2018, data were collected over three survey rounds. The Round 1 survey included the items developed from the literature search described above, along with open-ended questions asking for participant comments or suggested new items. The Round 2 survey consisted of these new items and any items needing to be re-rated because they did not receive clear consensus (see point 2 below). The Round 3 survey consisted of items that were new in Round 2 and did not receive clear consensus. See Additional file 1 for copies of the three survey rounds.
After participants completed a survey round, the survey items were categorised as follows:
1.
Endorsed. The item received an ‘essential’ or ‘important’ rating from at least 80% of participants in each of the panels.
2.
Re-rate. The item received an ‘essential’ or ‘important’ rating from 70–79% of participants in each of the panels, or from 80% or more in at least one panel and 70–79% in the remaining panels.
3.
Rejected. The item did not meet the criteria to be endorsed or re-rated.
If a re-rated item did not receive an ‘essential’ or ‘important’ rating from 80% or more of participants in each of the panels, it was rejected.
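The categorisation rules above can be expressed as a simple decision procedure. The sketch below is illustrative only: the function name and the representation of panel results as a mapping from panel name to the percentage of participants rating an item ‘essential’ or ‘important’ are assumptions, not part of the study’s materials.

```python
def categorise_item(pct_endorsed_by_panel):
    """Categorise a survey item given, for each panel (consumer, carer,
    professional), the percentage of participants who rated it
    'essential' or 'important'. Returns 'endorsed', 're-rate' or
    'rejected', following the thresholds described in the text."""
    pcts = list(pct_endorsed_by_panel.values())
    # Endorsed: at least 80% in every panel.
    if all(p >= 80 for p in pcts):
        return "endorsed"
    # Re-rate: 70-79% in every panel, OR at least one panel at 80%+
    # with all remaining panels at 70% or above.
    if all(70 <= p < 80 for p in pcts):
        return "re-rate"
    if any(p >= 80 for p in pcts) and all(p >= 70 for p in pcts):
        return "re-rate"
    # Anything else fails both criteria.
    return "rejected"
```

For example, an item rated ‘essential’ or ‘important’ by 85%, 90% and 81% of the three panels would be endorsed, while one rated 85%, 74% and 72% would go forward for re-rating.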
The working group analysed the comments collected in Round 1 to develop new items for the Round 2 survey.
Participants were given a report of the Round 1 and Round 2 responses listing the items that were endorsed, rejected or to be re-rated in the next round. For each item to be re-rated, the report included each panel’s percentages for each rating (i.e. ‘essential’, ‘important’, etc.) and the participant’s own rating. Participants could use this report to compare their ratings with each panel’s and decide whether to change them.
Step 4: re-develop the 2008 guidelines
The first author wrote the endorsed items into a guidelines document, combining survey items and deleting repetition as needed, while retaining the original wording as much as possible. Examples and explanatory notes were added to clarify items. The working group reviewed this draft, which was then given to participants for final comment and endorsement.
Ethics, consent and permissions
This research was approved by the University of Melbourne Human Ethics Committee (ID#1648030). Informed consent, including permission to report individual participants’ de-identified qualitative data, was obtained from all participants by their clicking ‘yes’ to a question about informed consent in the Round 1 survey.