Validation study of the Functional Assessment of Cancer Therapy-Cognitive Function-Version 3 for the Portuguese population

Abstract

Background

Cancer-related cognitive impairment is a common and potentially debilitating symptom experienced by patients with non-central nervous system (CNS) cancers, with a negative impact on their quality of life. The Functional Assessment of Cancer Therapy-Cognitive Function-Version 3 (FACT-Cog-v3) is the most extensively used instrument specifically developed to evaluate cognitive complaints in adult cancer patients. Nevertheless, this self-report measure has not yet been validated for the Portuguese population. Therefore, the purpose of this study was to evaluate the psychometric properties of the FACT-Cog-v3 among patients with non-CNS cancers in Portugal.

Methods

The validation study was conducted based on a convenience sample of 281 patients with non-CNS cancers, aged between 18 and 65 years, recruited online. A confirmatory factor analysis (CFA) was used to test the factor structure of the Portuguese FACT-Cog-v3 version; internal consistency analysis was also conducted. The European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core-30 (EORTC QLQ-C30–version 3) and the Hospital Anxiety and Depression Scale (HADS) were also used to test the concurrent, convergent, and discriminant validity of the scale.

Results

CFA supported a four-factor model with good fit indices and internal consistencies: perceived cognitive impairments (α = 0.97), comments from others (α = 0.92), perceived cognitive abilities (α = 0.93), and impact on quality of life (α = 0.92). Concurrent, convergent, and discriminant validity were confirmed. Moderate and strong correlations were found between the FACT-Cog-v3 subscales and the QLQ-C30 cognitive functioning subscale. Good convergent validity, with moderate correlations, was found between the FACT-Cog-v3 subscales and the HADS-A, HADS-D, and QLQ-C30 fatigue, sleep disturbance, and global health status subscales. Acceptable discriminant validity, with weak and moderate correlations, was demonstrated between the FACT-Cog-v3 subscales and the QLQ-C30 pain and nausea/vomiting subscales.

Conclusions

The Portuguese FACT-Cog-v3 version can be considered a reliable and valid measure to assess cognitive concerns of patients with non-CNS cancers, with relevance for research and clinical practice.

Background

Cancer-related cognitive impairment (CRCI) refers to cognitive problems related to cancer and cancer treatments, and is commonly experienced by patients throughout the disease trajectory [1, 2]. Although often subtle, problems with short-term and working memory, attention, processing speed, and executive functions can have a significant impact on various domains of patients' quality of life (QoL), including work and social life [3]. Given the consequences of CRCI and its high prevalence (from 22 to 41% in patients with non-central nervous system (CNS) cancers, compared with healthy controls; [4]), the identification of individuals with CRCI is necessary to guarantee adequate supportive care to those who need it [1].

Cognitive function can be assessed with formal neuropsychological tests (objective cognitive function) and with subjective assessments (perceived/subjective/self-reported cognitive function, referred to interchangeably in this study as perceived cognitive functioning or PCF) [5,6,7]. Both objective cognitive function and PCF are important outcomes in research and clinical practice. Traditionally, formal neuropsychological tests have been viewed as the "gold standard" measure of cognitive function [5], detecting subtle impairments in non-clinical populations [8]. However, these tests can be burdensome for patients and researchers and may not be sensitive enough to detect subtle changes in cancer patients [8, 9]. Subjective assessment, through the administration of self-report questionnaires, can be a more practical approach and is effective and valid for measuring patients' PCF [7, 9, 10]. Although subjective assessment shows a limited correlation with neuropsychological evaluation (see [5, 6] for several factors that may contribute to this difference), it is clinically very useful for understanding patient distress and perceptions of cognitive functioning, and for identifying patients with subtle deficits who may benefit from a neuropsychological assessment and/or close monitoring [10]. Thus, some authors argue that subjective assessment is even more relevant than neuropsychological testing [11]. Furthermore, previous systematic reviews have reported moderate to strong associations between self-reported cognitive symptoms and patient-reported outcomes, such as anxiety, depression, fatigue, and lower health status [2, 5].

Most PCF questionnaires were not developed for, and have not been properly validated with, cancer patients [5, 7]. Two measures have been most commonly used in the literature to assess PCF in cancer patients: the cognitive functioning subscale of the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core-30 (EORTC QLQ-C30-version 3, briefly QLQ-C30) [12] and the Functional Assessment of Cancer Therapy-Cognitive Function-Version 3 (FACT-Cog-v3) [7]. The QLQ-C30 is a measure of QoL and comprises only two items assessing cognitive function, namely memory and concentration. Because this questionnaire does not assess other cognitive domains and does not provide additional information on the impact of cognitive changes on QoL, its use as the sole indicator of PCF may result in an underestimation of the extent of cognitive difficulties [5]. Consequently, a more comprehensive and multi-dimensional measure is potentially more valid [5, 10]. In this context, the FACT-Cog-v3 [7] is one of the best-known and most commonly used instruments [13], both in research and in clinical settings, specifically developed to evaluate cognitive complaints in adult cancer patients [1, 5].

The FACT-Cog-v3 is a relatively brief measure and seems to be one of the most promising self-report instruments to evaluate these specific concerns, incorporating multiple dimensions such as perceived cognitive impairments (CogPCI), comments from others (CogOth), perceived cognitive abilities (CogPCA), and impact on quality of life (CogQoL). This scale was originally developed in English [7] and has been widely administered across clinical settings and validated across different cultures and languages, including French [14], Chinese [15], Korean [16], Japanese [17], Turkish [18], and English [9, 19]. These validation studies have shown good psychometric qualities, including reliability, validity, and demonstration of cross-cultural adequacy.

To our knowledge, there are no validated comprehensive scales in Portugal to assess PCF among cancer patients. The Functional Assessment of Chronic Illness Therapy (FACIT) team, who developed the FACT-Cog-v3, has also developed other health outcome measures specific to cancer patients. Among these is the Functional Assessment of Cancer Therapy-General (FACT-G), which has been adapted and validated for the Portuguese population [20]. However, this instrument only assesses quality of life in cancer patients and does not provide any measure of PCF. Therefore, the aim of this study was to evaluate the psychometric properties of the FACT-Cog-v3 among patients with non-CNS cancers in Portugal. The factor structure and internal consistency of this version were explored. Furthermore, the relationship between the FACT-Cog-v3 and theoretically related constructs was examined to determine the concurrent, convergent, and discriminant validity of the measure.

Methods

Participants

A convenience sample of 281 patients with non-CNS cancers completed the FACT-Cog-v3 online. The inclusion criteria were: (1) age between 18 and 65 years; (2) diagnosis of a non-CNS cancer; (3) undergoing or having received treatments for cancer; (4) ability to read and write Portuguese; and (5) Portuguese nationality. Patients with (1) psychiatric or communication disorders and/or other serious medical conditions; (2) CNS metastases; or (3) a diagnosis of dementia, epilepsy, brain injury (stroke, head injury), or drug or alcohol abuse were excluded, since these conditions might affect cognitive functioning. Of the total sample, 266 participants additionally filled out the QLQ-C30 and, of those, 258 participants also filled out the Hospital Anxiety and Depression Scale (HADS) (see the Measures section for a description of these instruments).

Procedure

Volunteer cancer patients were recruited through online advertisements disseminated across Portugal. An online survey (LimeSurvey®) hosted on a University of Aveiro server was used to collect data from participants. Participants were a self-selected sample who replied to advertisements posted on social media (Facebook), specifically in support groups, blogs/forums, cancer-related information groups, and pages of national cancer associations that agreed to help disseminate the study, targeting Portuguese adult cancer patients; national cancer associations were also invited to disseminate information about the study by e-mail to their members. Advertisements invited potential participants to access a link to the survey. Those who clicked on the link were then given detailed information about the study's goals, inclusion criteria, and ethical statements. Participants were informed that their participation was voluntary and that data confidentiality was ensured. Cancer patients who agreed to the study conditions provided informed consent by clicking the "Yes" option in response to the question "Do you accept to participate in this study?". The survey was open for four months, between January and April 2021. The protocol took approximately 30 min to complete. Participants' ethical treatment was safeguarded, in accordance with the Declaration of Helsinki [21] and the guidelines of the American Psychological Association [22]. The Ethics and Deontology Committee of the University of Aveiro (22 January 2020/No. 30/2019) approved all procedures of this study.

Measures

Participants completed a global self-report questionnaire assessing sociodemographic (e.g., age, education, occupation) and clinical variables (e.g., cancer diagnosis, treatments, brain injuries).

Version 3 of the FACT-Cog [7] used in this study was translated into universal Portuguese by the FACIT team, using an iterative methodology [23, 24]. For the present study, authorization was requested from FACIT to test its psychometric properties. Figure 1 presents an overview of the translation process performed by FACIT, as well as a schematic representation of the validation process described in the present article.

Fig. 1 Overview of the process of translation and validation of the Portuguese version of the Functional Assessment of Cancer Therapy-Cognitive Function-Version 3 (FACT-Cog-v3)

The FACT-Cog-v3 is a 37-item self-report measure assessing the cognitive concerns of cancer patients and consists of four subscales. For the CogPCI (20 items; 0–80) and CogOth (4 items; 0–16) items, the patient indicates how often each situation occurred during the last 7 days, on a 5-point Likert scale ("0 = Never" to "4 = Several times a day"); for the CogPCA (9 items; 0–36) and CogQoL (4 items; 0–16) items, a 5-point Likert scale ("0 = Not at all" to "4 = Very much") is used to indicate the severity of each situation over the last week. Although two items of CogPCI and two items of CogPCA are not currently scored under the FACT-Cog-v3 scoring algorithm, according to FACIT they may be included if additional analyses (i.e., internal consistency and individual item-total correlation coefficients) confirm that the items fit well with the scale. Therefore, all 37 items were used in this study to test the scale's psychometric properties [19]. Except for the CogPCA subscale, negatively worded items are reverse scored prior to summing the items. Higher scores indicate better PCF and better QoL. The reliability and validity of these scores have been established [14, 16], including in a preliminary evaluation of the Portuguese version that revealed good psychometric properties regarding reliability and concurrent and convergent validity [25].
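
As a practical illustration of this scoring scheme, the R sketch below sums each subscale after reverse scoring the negatively worded items. The column names (e.g., pci_01) are hypothetical placeholders rather than the official FACIT item codes, and the official FACIT scoring templates remain the authoritative reference.

```r
# Minimal sketch of the scoring logic described above, assuming one row per
# respondent and hypothetical column names (pci_01 ... qol_4) for the 37 items,
# each scored 0-4.
score_fact_cog <- function(dat) {
  pci_items <- sprintf("pci_%02d", 1:20)  # CogPCI: 20 items, subscale range 0-80
  oth_items <- sprintf("oth_%d", 1:4)     # CogOth: 4 items, range 0-16
  pca_items <- sprintf("pca_%d", 1:9)     # CogPCA: 9 items, range 0-36
  qol_items <- sprintf("qol_%d", 1:4)     # CogQoL: 4 items, range 0-16

  reverse <- function(x) 4 - x            # reverse-score a 0-4 item

  data.frame(
    # Negatively worded items (all subscales except CogPCA) are reversed before
    # summing, so higher scores reflect better PCF and better QoL.
    CogPCI = rowSums(reverse(dat[pci_items])),
    CogOth = rowSums(reverse(dat[oth_items])),
    CogPCA = rowSums(dat[pca_items]),
    CogQoL = rowSums(reverse(dat[qol_items]))
  )
}
```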

The QLQ-C30 [12, 26] is a 30-item self-report questionnaire that was used to assess health-related QoL. This scale includes a global health status/QoL subscale, functional and symptom subscales, and single items. Each item is scored on a 4-point Likert scale ("1 = Not at all" to "4 = Very much"), except the items of the global health/QoL subscale (modified 7-point linear analogue scale). The scores for each subscale range from 0 to 100; higher scores on the functional scales and global health/QoL represent better functioning and QoL, while higher scores on the symptom subscales and single items indicate worse symptoms. Of interest in this study were the cognitive functioning, global health/QoL, fatigue, sleep disturbance, pain, and nausea/vomiting subscales. Good psychometric properties were found in the Portuguese validation study [26]. In this study, the subscales used showed acceptable Cronbach's alpha values: Cognitive Functioning = 0.79, Fatigue = 0.88, Pain = 0.88, Nausea/Vomiting = 0.70, and Global Health Status/QoL = 0.91.
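
For reference, the standard EORTC linear transformation that maps raw item means onto this 0–100 metric can be sketched as follows; the item numbers in the usage example are illustrative, and the EORTC scoring manual remains the authoritative source.

```r
# Sketch of the EORTC linear transformation onto a 0-100 metric: the raw score
# is the mean of a scale's items; functional scales are reversed so that higher
# transformed scores mean better functioning, while symptom/global scales keep
# the direct orientation.
qlq_score <- function(items, type = c("functional", "symptom_or_global"), range) {
  type <- match.arg(type)
  rs <- rowMeans(items, na.rm = TRUE)                 # raw score per respondent
  if (type == "functional") (1 - (rs - 1) / range) * 100
  else                      ((rs - 1) / range) * 100
}

# Illustrative use: cognitive functioning (two items scored 1-4, so range = 3)
# cog_fun <- qlq_score(dat[c("q20", "q25")], type = "functional", range = 3)
```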

This study also used the HADS [27, 28], a 14-item self-report questionnaire useful for recognizing the emotional components of physical illness. The HADS consists of two seven-item subscales, one measuring anxiety (HADS-A) and one measuring depression (HADS-D); items are answered on a 4-point Likert scale. Each subscale score ranges from 0 to 21 points; higher scores indicate higher levels of anxious and depressive symptoms. Good psychometric properties were found in the Portuguese validation study [28]. In this study, Cronbach's alpha was acceptable (0.86) for both subscales.

Statistical analysis

Statistical analyses were performed with the Statistical Package for the Social Sciences (IBM SPSS, version 28.0; IBM SPSS, Inc., Chicago, IL) and with the lavaan package for R [29, 30].

Descriptive statistics were first calculated for the sample's demographic and clinical characteristics. Measurement characteristics, i.e., mean scores, standard deviations (SD), and ranges, are presented for each subscale.

Reliability, through internal consistency, was measured using the following techniques and cut-off recommendations: mean of the inter-item correlation (adequate if > 0.30), corrected item-total correlation (adequate if > 0.50) [31], and Cronbach’s alpha (acceptable if > 0.70 and high if > 0.90) [32,33,34].
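
As an illustration, these internal-consistency indices can be obtained for any one subscale with the psych package in R, roughly as follows; this is a minimal sketch in which the data frame name is a placeholder for the items of a single subscale.

```r
library(psych)

# Internal consistency for one FACT-Cog-v3 subscale, with items in the columns
# of a (placeholder) data frame pci_items_df. psych::alpha() returns the three
# indices referenced above, plus alpha-if-item-deleted.
rel <- psych::alpha(pci_items_df)
rel$total$raw_alpha      # Cronbach's alpha (> 0.70 acceptable, > 0.90 high)
rel$total$average_r      # mean inter-item correlation (adequate if > 0.30)
rel$item.stats$r.drop    # corrected item-total correlations (adequate if > 0.50)
rel$alpha.drop           # Cronbach's alpha if each item is deleted
```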

To test criterion validity of the scale, concurrent validity was established via correlation coefficients between the scores of the FACT-Cog-v3 and the QLQ-C30 cognitive functioning subscale.

Construct validity was determined by factorial, convergent, and discriminant validity. Confirmatory factor analysis (CFA) was used to test the hypothesis that the construct of PCF, as assessed by the FACT-Cog-v3, is composed of four separate factors: CogPCI, CogOth, CogPCA, and CogQoL [7]. Mardia's test was performed to assess the multivariate normality of the sample. Regarding sample size requirements for CFA, rules of thumb vary from five to ten subjects per variable, with a minimum of 100 subjects [34] or a range of 200–300 individuals [35, 36]. A CFA using the weighted least squares with mean and variance adjustment (WLSMV) estimator was conducted. We considered the following goodness-of-fit indices and respective cut-off recommendations for good adjustment [31, 37,38,39,40]: Chi-Square (χ2); Comparative Fit Index (CFI; 0.90 ≤ CFI ≤ 0.95); Tucker-Lewis Index (TLI; 0.90 ≤ TLI ≤ 0.95); Root Mean Square Error of Approximation (RMSEA; 0.05 ≤ RMSEA ≤ 0.08); and Standardized Root Mean Square Residual (SRMR ≤ 0.08). Local model fit was assessed through the items' standardized factor loadings (λ ≥ 0.50) and individual reliability (R2 ≥ 0.25) [31, 40].
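
For illustration, a four-factor CFA with a WLSMV estimator of this kind can be specified in lavaan (the package named below under Statistical analysis) roughly as sketched here; the item names are hypothetical placeholders for the 37 questionnaire columns, and the listed fit indices can then be extracted from the fitted model.

```r
library(lavaan)

# Four-factor CFA of the FACT-Cog-v3 with the WLSMV estimator; item names are
# hypothetical placeholders for the 37 questionnaire columns in `dat`.
model <- '
  CogPCI =~ pci_01 + pci_02 + pci_03   # ... through the 20 CogPCI items
  CogOth =~ oth_1 + oth_2 + oth_3 + oth_4
  CogPCA =~ pca_1 + pca_2 + pca_3      # ... through the 9 CogPCA items
  CogQoL =~ qol_1 + qol_2 + qol_3 + qol_4
'

fit <- cfa(model, data = dat, estimator = "WLSMV",
           ordered = names(dat))        # treat the 5-point items as ordinal

fitMeasures(fit, c("chisq", "df", "cfi", "tli",
                   "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))
standardizedSolution(fit)               # standardized loadings (lambda)
lavInspect(fit, "rsquare")              # individual item reliabilities (R2)
```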

Convergent and discriminant validity were assessed using the Fornell and Larcker [41] criterion and by correlations with external criteria. Convergent validity of the measurement model can be assessed by the average variance extracted (AVE; AVE ≥ 0.50) and construct reliability (CR) for each factor (CR ≥ 0.70) [41], and discriminant validity is supported when the AVE for a construct is greater than the squared interconstruct correlations [31]. Convergent validity was also assessed by examining the correlations between FACT-Cog-v3 subscales and HADS and QLQ-C30 fatigue, sleep disturbance, and global health status subscales. Discriminant validity was further examined through the correlation between FACT-Cog-v3 subscales and QLQ-C30 pain and nausea/vomiting subscales.
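
Building on the model sketched above, AVE and CR can be computed per factor from the standardized loadings, and each AVE compared with the squared inter-factor correlations; the following is a minimal sketch under the same assumptions.

```r
# Fornell-Larcker criterion from the fitted lavaan model: AVE and construct
# reliability (CR) per factor from standardized loadings, then AVE compared
# with the squared correlations between latent factors.
std   <- standardizedSolution(fit)
loads <- subset(std, op == "=~")                     # standardized loadings

fl <- do.call(rbind, lapply(split(loads, loads$lhs), function(d) {
  lambda <- d$est.std
  data.frame(factor = d$lhs[1],
             AVE = mean(lambda^2),                   # adequate if >= 0.50
             CR  = sum(lambda)^2 /
                   (sum(lambda)^2 + sum(1 - lambda^2)))  # adequate if >= 0.70
}))

phi_sq <- lavInspect(fit, "cor.lv")^2                # squared inter-factor correlations
fl                                                   # compare each AVE with phi_sq
```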

Following the guidelines presented by Ratner [42], the correlations were classified as weak (0–0.3), moderate (0.3–0.7), and strong (> 0.7–1.0).
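
As a brief illustration, a Spearman correlation with an external criterion and its classification under Ratner's cut-offs might look as follows; the variable names are illustrative.

```r
# Spearman correlation between a FACT-Cog-v3 subscale score and an external
# criterion (here a placeholder QLQ-C30 score), classified per Ratner's cut-offs.
ct  <- cor.test(scores$CogPCI, scores$qlq_cognitive, method = "spearman")
rho <- unname(ct$estimate)

classify_r <- function(r) {
  a <- abs(r)
  if (a > 0.7) "strong" else if (a > 0.3) "moderate" else "weak"
}
classify_r(rho)   # e.g., "moderate"
```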

All significance tests were conducted using a significance level of p < 0.05.

Results

Participants

The demographic and clinical characteristics of the sample are displayed in Table 1. Cancer patients were aged 18–65 years, with a mean age of 45.97 years. The most frequently reported cancer diagnosis was breast cancer (62.7%), followed by Hodgkin lymphoma and colorectal cancer (both 6.0%). More than two-thirds of the cancers (68.3%) had been diagnosed during the last 5 years. More than 80% of the sample had undergone surgery (82.9%) and chemotherapy (80.8%). At the time of the study, 57.3% of the sample had completed treatment, while 29.9% were still receiving hormone therapy.

Table 1 Sociodemographic and clinical characteristics of the sample (N = 281)

Description of the Portuguese FACT-Cog-v3

Means, SDs, and ranges for the Portuguese version of the FACT-Cog-v3 items and subscales are presented in Table 2. The lowest score emerged for CogPCH2 (M = 1.21, SD = 1.13) and the highest score for CogM9 (M = 3.54, SD = 0.86). The mean scores of the four subscales were 47.56 (SD = 20.47), 13.64 (SD = 3.54), 16.33 (SD = 7.70), and 8.64 (SD = 4.51) for CogPCI, CogOth, CogPCA, and CogQoL, respectively.

Table 2 FACT-Cog-v3 subscales means and standard deviations, range, item-total correlations, and Cronbach’s alphas (N = 281)

Factor validity

Mardia's test showed that the data were not multivariate normal: multivariate skewness g1,p = 250.14 (χ²Skew = 11,714.97, p < 0.001), multivariate kurtosis g2,p = 1351.46 (ZKurtosis = 34.26, p < 0.001), and small-sample-corrected skewness χ²SMSkew = 11,847.45 (p < 0.001). The achieved sample size was sufficient to ensure the stability of a factor solution.

A CFA with the WLSMV estimator was used to confirm the four-factor structure of the scale. Results revealed a good global fit: χ2(623) = 1096.48; CFI = 0.903; TLI = 0.897; RMSEA = 0.052, RMSEA 90% CI [0.047, 0.057]; SRMR = 0.055. Moreover, all items reached high factor loadings and appropriate individual reliability on their latent variables. The structural model tested using CFA, and the resulting factor loadings and correlations, are displayed in Fig. 2. Factor analysis of the 33-item FACT-Cog-v3 revealed a similar pattern (see Additional file 1).

Fig. 2 Diagram of the four-factor structure (37 items) obtained using CFA with the WLSMV estimator

Reliability

For the FACT-Cog-v3 dimensions of CogPCI, CogOth, CogPCA, and CogQoL, adequate mean inter-item correlations were obtained (0.604, 0.755, 0.608, and 0.749, respectively), indicating that items within the same factor assess the same construct. Regarding corrected item-total correlations, all items showed adequate values within their dimension (ranging between 0.464 and 0.867), indicating that all items are well correlated with the corresponding dimension. Cronbach's alpha coefficients for the FACT-Cog-v3 subscales were 0.97 for CogPCI, 0.92 for CogOth, 0.93 for CogPCA, and 0.92 for CogQoL, indicating high reliability. Deleting any single item would not substantially change reliability, since all alpha-if-item-deleted values were close to the Cronbach's alpha of the corresponding subscale. Internal consistency estimates for the FACT-Cog-v3 subscales are displayed in Table 2.

Concurrent validity

Spearman's correlations were calculated between the FACT-Cog-v3 subscale scores and the QLQ-C30 cognitive functioning subscale to establish concurrent validity (Table 3). All FACT-Cog-v3 subscale scores correlated positively (moderate and strong correlations) with the QLQ-C30 cognitive functioning subscale.

Table 3 FACT-Cog-v3 Spearman’s correlations with cognitive functioning, anxiety, depression, fatigue, sleep disturbance, global health status, pain, and nausea/vomiting scores

Convergent and discriminant validity assessed by the Fornell and Larcker method

According to the Fornell and Larcker [41] criterion, indicators of convergent validity were good: CR was adequate for all factors (0.98, 0.95, 0.96, and 0.96 for CogPCI, CogOth, CogPCA, and CogQoL, respectively), and AVE values were also adequate (AVECogPCI = 0.63, AVECogOth = 0.74, AVECogPCA = 0.62, AVECogQoL = 0.75).

Discriminant validity was assessed by comparing the AVEs with the squared correlations between factors. All factors showed discriminant validity, as AVE values were above the squared correlations between factors, except between CogPCI and CogPCA (Table 4). Further analyses were conducted to examine whether the two factors should be maintained as separate dimensions. As shown in Table 3, the partial correlation between CogPCI and the QLQ-C30 global health status subscale, controlling for CogPCA, was not significant; conversely, the partial correlation between CogPCA and the same QLQ-C30 subscale, controlling for CogPCI, was significant.
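
For readers wishing to reproduce this check, such partial correlations can be computed, for example, with the ppcor package in R; the sketch below is hypothetical and uses placeholder variable names.

```r
library(ppcor)

# Partial Spearman correlations described above, with placeholder column names:
# CogPCI vs. QLQ-C30 global health status controlling for CogPCA, and vice versa.
pcor.test(x = scores$CogPCI, y = scores$qlq_global, z = scores$CogPCA,
          method = "spearman")
pcor.test(x = scores$CogPCA, y = scores$qlq_global, z = scores$CogPCI,
          method = "spearman")
```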

Table 4 Discriminant validity results—Fornell and Larcker criterion

Convergent and discriminant validity by external criteria

Convergent validity was examined through Spearman's correlations between the FACT-Cog-v3 subscales and the HADS and QLQ-C30 fatigue, sleep disturbance, and global health status subscales (Table 3). All FACT-Cog-v3 scores correlated negatively (moderate correlations) with HADS-A, HADS-D, and the QLQ-C30 fatigue and sleep disturbance subscales, and positively (moderate correlations) with the QLQ-C30 global health status subscale.

For discriminant validity, correlations between all FACT-Cog-v3 scores and the QLQ-C30 pain and nausea/vomiting subscales were obtained, showing negative correlations (weak and moderate for pain, and weak for nausea/vomiting) (Table 3).

Discussion

The main goal of the present study was to provide evidence of the reliability and validity of the Portuguese version of the FACT-Cog-v3, thus making available an instrument that assesses PCF to the Portuguese cancer population. Our results demonstrated that the FACT-Cog-v3 is a reliable and valid measure of CRCI among patients with non-CNS cancers in Portugal.

In line with recent recommendations arising from the positive results of the study by Koch et al. [19] and the FACIT scoring instructions, this study used the full 37-item scale, including the additional multitasking items. The findings of the CFA showed a good fit between the hypothesized model and the observed data, as well as acceptable loadings. Thus, all items support the four-factor structure of the FACT-Cog-v3 scale, consistent with the CogPCI, CogQoL, CogOth, and CogPCA subscales proposed by the original authors [7] and with other language validations [16, 17]. The results also confirmed that the additional multitasking items load on the expected subscales, as proposed by the original authors [7]. Considering the positive results obtained with the 37 items, this study supports the use of the full scale in research and clinical practice to gain a comprehensive understanding of PCF [19]. We should also note that good psychometric findings were obtained with the 33-item version, leading us to conclude that both Portuguese versions are valid; each user can therefore opt for the version that best fits their purpose. Moreover, this validation study was conducted with patients with non-CNS cancers, rather than with breast cancer patients only, as most previous studies did [15,16,17], providing support for the robustness and stability of the instrument's multidimensional structure, which holds across various cultural contexts and cancer populations.

Furthermore, there was evidence for the convergent and discriminant validity of the four-factor model: the items within each factor correlated positively with one another, and the items from each subscale did not correlate with the items of the other subscales. We should note, however, that although the findings point towards good discriminant validity between factors, there is an exception for CogPCI and CogPCA, with the squared interfactor correlation slightly above the desired threshold. Nonetheless, the literature indicates that these scales represent two separate factors [43], and the partial correlation results obtained in the present work show that both factors capture different information related to QoL. Therefore, we decided to maintain both factors as separate dimensions, in line with the original scale.

Reliability results supported the dimensionality findings. Our findings indicated very good internal consistency for the factors of the FACT-Cog-v3 (all above 0.91), in line with or even higher than the reliability scores found in previous studies [16]. At the item level, all items appeared worthy of retention, and the inter-item and item-total correlations indicated the items' adequacy and homogeneity in measuring the construct that the FACT-Cog-v3 intends to measure. Cronbach's alpha coefficients also did not improve with the removal of any item from the four factors. Taken together, these results confirm the theoretical structure with the four subscales.

Results obtained from concurrent validity analysis revealed that all FACT-Cog-v3 subscales scores had moderate and strong positive correlations with the QLQ-C30 cognitive functioning subscale. The QLQ-C30 cognitive functioning subscale is an established self-report scale to demonstrate concurrent validity of the FACT-Cog-v3 [16, 19]. This result is thus consistent with the moderate correlations found between the Chinese [15] and Korean [16] versions of the FACT-Cog-v3 and the QLQ-C30 cognitive functioning subscale, providing support for the concurrent validity of the Portuguese version of the FACT-Cog-v3.

Similar to the other validations of the FACT-Cog-v3, evidence of convergent validity of the scale was confirmed by correlations of this scale with theoretically related constructs. Moderate negative correlations were found with anxiety [3, 15, 44] and depressive [3, 16, 44] symptoms, fatigue [3, 15, 19, 44], and sleep disturbance [3, 44]. Moderate positive correlations were found for global health status [15]. These findings are consistent with previous validation studies [15, 16, 19]. In terms of discriminant validity, weak and moderate negative correlations were obtained for pain and weak negative correlations for nausea/vomiting, as described in Koch et al. [19]. Thus, these results provide further evidence of the FACT-Cog-v3’s discriminant validity.

Despite the encouraging results, this study has some limitations that should be addressed. First, our sample was recruited online, which may represent a selection bias (i.e., selection of cancer patients who have digital literacy and Internet access, and who are perhaps more educated and more likely to be employed). Therefore, future research should recruit participants in person, to examine whether the good psychometric properties verified in this study are maintained with cancer patients with different sociodemographic characteristics. The study's cross-sectional design is also a limitation, precluding the determination of test–retest reliability. We recommend that the temporal stability of this version also be examined in the future. In addition to temporal stability, measurement invariance across groups (e.g., sex, age), namely configural, metric, and scalar invariance, should be examined in future studies. Additionally, future studies should consider performing these analyses with larger samples. Finally, caution is also needed in interpreting these findings, considering the social and health context of the COVID-19 pandemic in which the study was conducted, since some authors have warned of the possible interference of the stress related to this event with the cognitive problems reported by cancer survivors [45], as well as the impact of COVID-19 disease on cognitive functioning [46]. However, a previous preliminary study conducted outside the pandemic context [25] points to similar results, which leads us to believe that the pandemic may not have influenced the validation of the scale.

Notwithstanding these limitations, we believe that our study provides important contributions to the field of CRCI literature, offering evidence of the good psychometric characteristics of the FACT-Cog-v3 scale in a Portuguese sample of patients with non-CNS cancers. Using this measure in clinical practice may contribute to a better understanding of patients’ cognitive difficulties, thus helping to provide proper interventions to mitigate the effects of CRCI and improve QoL in this population. Furthermore, future studies can also use the Portuguese version of the FACT-Cog-v3 to assess the efficacy of cognitive intervention programs in cancer patients.

Conclusions

Cognitive symptoms are among the most frequent and worrying side effects experienced by patients with non-CNS cancers. Considering their detrimental impact on QoL, it is necessary to provide validated instruments to help researchers and clinicians evaluate the nature and extent of these complaints. This study aimed to analyze the psychometric properties of the Portuguese version of the FACT-Cog-v3. Overall, the 37-item, four-factor version of the scale appears to be a reliable and valid measure of CRCI among patients with non-CNS cancers in Portugal.

Availability of data and materials

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AVE: Average variance extracted
CRCI: Cancer-related cognitive impairment
CNS: Central nervous system
CogPCA: Perceived cognitive abilities
CogPCI: Perceived cognitive impairments
CogOth: Comments from others
CogQoL: Impact on quality of life
CFI: Comparative Fit Index
CFA: Confirmatory factor analysis
CR: Construct reliability
EORTC QLQ-C30-version 3/QLQ-C30: European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core-30
FACIT: Functional Assessment of Chronic Illness Therapy
FACT-Cog-v3: Functional Assessment of Cancer Therapy-Cognitive Function-Version 3
HADS: Hospital Anxiety and Depression Scale
PCF: Perceived cognitive functioning
R²: Individual reliability
RMSEA: Root Mean Square Error of Approximation
SD: Standard deviation
SRMR: Standardized Root Mean Square Residual
TLI: Tucker–Lewis Index
QoL: Quality of life
WLSMV: Weighted least squares with mean and variance adjustment
χ²: Chi-square

References

  1. Mayo SJ, Lustberg M, Dhillon HM, Nakamura ZM, Allen DH, Von Ah D, et al. Cancer-related cognitive impairment in patients with non-central nervous system malignancies: an overview for oncology providers from the MASCC Neurological Complications Study Group. Support Care Cancer. 2021;29(6):2821–40. https://doi.org/10.1007/s00520-020-05860-9.

  2. Pullens MJJ, de Vries J, Roukema JA. Subjective cognitive dysfunction in breast cancer patients: a systematic review. Psychooncology. 2010;19(11):1127–38. https://doi.org/10.1002/pon.1673.

  3. Ahles TA, Root JC. Cognitive effects of cancer and cancer treatments. Annu Rev Clin Psychol. 2018;14:425–51. https://doi.org/10.1146/annurev-clinpsy-050817084903.

  4. Hervey-Jumper SL, Monje M. Unravelling the mechanisms of cancer-related cognitive dysfunction in non–central nervous system cancer. JAMA Oncol. 2021;7(9):1311–2. https://doi.org/10.1001/jamaoncol.2021.1900.

  5. Bray VJ, Dhillon HM, Vardy JL. Systematic review of self-reported cognitive function in cancer patients following chemotherapy treatment. J Cancer Surviv. 2018;12(4):537–59. https://doi.org/10.3322/caac.21492.

  6. Costa DSJ, Fardell JE. Why are objective and perceived cognitive function weakly correlated in patients with cancer? J Clin Oncol. 2019;37(14):1154–8. https://doi.org/10.1200/JCO.18.02363.

  7. Wagner LI, Sweet J, Butt Z, Lai J, Cella D. Measuring patient self-reported cognitive function: development of the Functional Assessment of Cancer Therapy – Cognitive Function instrument. J Support Oncol. 2009;7(6):W32–9.

  8. Tannock IF, Ahles TA, Ganz PA, van Dam FS. Cognitive impairment associated with chemotherapy for cancer: report of a workshop. J Clin Oncol. 2004;22(11):2233–9. https://doi.org/10.1200/JCO.2004.08.094.

  9. Costa DSJ, Loh V, Birney DP, Dhillon HM, Fardell JE, Gessler D, et al. The structure of the FACT-Cog v3 in cancer patients, students, and older adults. J Pain Symptom Manag. 2018;55(4):1173–8. https://doi.org/10.1016/j.jpainsymman.2017.12.486.

  10. Lai JS, Butt Z, Wagner L, Sweet JJ, Beaumont JL, Vardy J, et al. Evaluating the dimensionality of perceived cognitive function. J Pain Symptom Manag. 2009;37(6):982–95. https://doi.org/10.1016/j.jpainsymman.2008.07.012.

  11. Savard J, Ganz PA. Subjective or objective measures of cognitive functioning—What’s more important? JAMA Oncol. 2016;2(10):1263–4. https://doi.org/10.1001/jamaoncol.2016.2047.

  12. Aaronson NK, Ahmedzai S, Bergman B, Bullinger M, Cull A, Duez NJ, et al. The European Organization for Research and Treatment of Cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in Oncology. J Natl Cancer Inst. 1993;85(5):365–76. https://doi.org/10.1093/jnci/85.5.365.

  13. Henneghan AM, Van Dyk K, Kaufmann T, Harrison R, Gibbons C, Heijnen C, et al. Measuring self-reported cancer-related cognitive impairment: recommendations from the Cancer Neuroscience Initiative Working Group. JNCI J Natl Cancer Inst. 2021;113(12):1625–33. https://doi.org/10.1093/jnci/djab027.

  14. Joly F, Lange M, Rigal O, Correia H, Giffard B, Beaumont JL, et al. French version of the Functional Assessment of Cancer Therapy-Cognitive Function (FACT-Cog) version 3. Support Care Cancer. 2012;20(12):3297–305. https://doi.org/10.1007/s00520-012-1439-2.

  15. Cheung YT, Lim SR, Shwe M, Tan YP, Chan A. Psychometric properties and measurement equivalence of the English and Chinese versions of the Functional Assessment of Cancer Therapy-Cognitive in Asian patients with breast cancer. Value Heal. 2013;16(6):1001–13. https://doi.org/10.1016/j.jval.2013.06.017.

  16. Park JH, Bae SH, Jung YS, Jung YM. The psychometric properties of the Korean version of the Functional Assessment of Cancer Therapy-Cognitive (FACT-Cog) in Korean patients with breast cancer. Support Care Cancer. 2015;23(9):2695–703. https://doi.org/10.1007/s00520-015-2632-x.

  17. Miyashita M, Tsukamoto N, Hashimoto M, Kajiwara K, Kako J, Okamura H. Validation of the Japanese version of the Functional Assessment of Cancer Therapy-Cognitive Function Version 3. J Pain Symptom Manag. 2020;59(1):139–46. https://doi.org/10.1016/j.jpainsymman.2019.09.027.

  18. Atasavun Uysal S, Yildiz Kabak V, Karakas Y, Karabulut E, Erdan Kocamaz D, Keser İT, et al. Investigation of the validity and reliability of the Turkish version of the Functional Assessment of Cancer Therapy - Cognitive Function in cancer patients. Palliat Support Care. 2021. https://doi.org/10.1017/S147895152100136X.

  19. Koch V, Wagner LI, Green HJ. Assessing neurocognitive symptoms in cancer patients and controls: psychometric properties of the FACT-Cog3. Curr Psychol. 2021. https://doi.org/10.1007/s12144-021-02088-6.

  20. Pereira F, Santos C. Estudo de adaptação cultural e validação da Functional Assessment of Cancer Therapy-General em cuidados paliativos. Referência. 2011;III Série(n.º5):45–54. https://doi.org/10.12707/RIII1041

  21. World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2000;284(23):3043–5. https://doi.org/10.1001/jama.284.23.3043.

  22. American Psychological Association. Publication Manual of the American Psychological Association – 7th ed. Washington, DC: American Psychological Association; 2020. https://doi.org/10.1037/rev0000126

  23. Bonomi AE, Cella DF, Hahn EA, Bjordal K, Sperner-Unterweger B, Gangeri L, et al. Multilingual translation of the Functional Assessment of Cancer Therapy (FACT) quality of life measurement system. Qual Life Res. 1996;5(3):309–20. https://doi.org/10.1007/BF00433915.

  24. Eremenco SL, Cella D, Arnold BJ. A comprehensive method for the translation and cross-cultural validation of health status questionnaires. Eval Health Prof. 2005;28(2):212–32. https://doi.org/10.1177/0163278705275342.

  25. Oliveira AF, Santos IM, Torres A. Preliminary validation study of the Portuguese version of the Functional Assessment of Cancer Therapy – Cognitive Function - Version 3 (FACT-Cog-v3). J Stat Heal Decis. 2021;3(1):87–90. https://doi.org/10.34624/jshd.v3i1.24904

  26. Pais-Ribeiro J, Pinto C, Santos C. Validation study of Portuguese version of the QLC-C30-V.3. Psicol Saúde Doenças. 2008;9(1):89–102.

  27. Zigmond AS, Snaith RP. The Hospital Anxiety and Depression Scale. Acta Psychiatr Scand. 1983;67(6):361–70. https://doi.org/10.1111/j.1600-0447.1983.tb09716.x.

  28. Pais-Ribeiro J, Silva I, Ferreira T, Martins A, Meneses R, Baltar M. Validation study of a Portuguese version of the Hospital Anxiety and Depression Scale. Psychol Health Med. 2007;12(2):225–37. https://doi.org/10.1080/13548500500524088.

  29. Rosseel Y. lavaan: An R Package for Structural Equation Modeling. J Stat Softw. 2012;48(2):1–36. https://doi.org/10.18637/jss.v048.i02

  30. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2021.

  31. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate Data Analysis (8th Edition). Hampshire, UK: Cengage Learning, EMEA; 2019.

  32. Nunnally JC. Psychometric theory. New York: McGraw-Hill; 1978.

  33. Streiner DLN, John RGC. Health measurement scales: a practical guide to their development and use (5th Edition). Oxford: Oxford University Press; 2014.

  34. Kline P. The handbook of psychological testing (2nd Edition). Abingdon: Routledge; 1999.

  35. Guadagnoli E, Velicer WF. Relation of sample size to the stability of component patterns. Psychol Bull. 1988;103(2):265–75. https://doi.org/10.1037/0033-2909.103.2.265.

  36. Wolf EJ, Harrington KM, Clark SL, Miller MW. Sample size requirements for structural equation models: an evaluation of power, bias, and solution propriety. Educ Psychol Meas. 2013;73(6):913–34. https://doi.org/10.1177/0013164413495237.

  37. Brown TA. Confirmatory factor analysis for applied research. New York: The Guilford Press; 2015.

  38. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55. https://doi.org/10.1080/10705519909540118.

  39. Kline RB. Principles and practice of structural equation modeling. New York: The Guilford Press; 2011.

  40. Marôco J. Analysis of Structural Equations: Theoretical fundamentals, Software & Applications (3rd Edition). Pêro Pinheiro: ReportNumber; 2021.

  41. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1):39–50. https://doi.org/10.1177/002224378101800104.

  42. Ratner B. The correlation coefficient: Its values range between +1/-1, or do they? J Target Meas Anal Mark. 2009;17(2):139–42. https://doi.org/10.1057/jt.2009.5.

  43. Lai JS, Wagner LI, Jacobsen PB, Cella D. Self-reported cognitive concerns and abilities: Two sides of one coin? Psychooncology. 2014;23(10):1133–41. https://doi.org/10.1002/pon.3522.

  44. Bender CM, Thelen BD. Cancer and cognitive changes: The complexity of the problem. Semin Oncol Nurs. 2013;29(4):232–7. https://doi.org/10.1016/j.soncn.2013.08.003.

  45. Nekhlyudov L, Duijts S, Hudson SV, Jones JM, Keogh J, Love B, et al. Addressing the needs of cancer survivors during the COVID-19 pandemic. J Cancer Surviv. 2020;14(5):601–6. https://doi.org/10.1007/s11764-020-00884-w.

  46. Becker JH, Lin JJ, Doernberg M, Stone K, Navis A, Festa JR, et al. Assessment of cognitive function in patients after COVID-19 infection. JAMA Netw Open. 2021;4(10):8–11. https://doi.org/10.1001/jamanetworkopen.2021.30645.

Acknowledgements

The authors thank the study participants. Furthermore, the authors would like to thank the Functional Assessment of Chronic Illness Therapy (FACIT) organization for giving authorization to use and validate the FACT-Cog-v3 in Portugal.

Funding

This work was supported by national funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., within CINTESIS R&D Unit (UIDB/4255/2020 and UIDP/4255/2020), the project RISE (LA/P/0053/2020), and the William James Center for Research R&D Unit (UIDB/04810/2020). The first author was awarded a PhD fellowship (SFRH/BD/138785/2018). The funding agency had no role in the study design, data collection, and analysis; decision to publish the manuscript; or preparation of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

AFO, IMS, and AT contributed to the study conception and design. Material preparation and data collection were performed by AFO, IMS, SF, and AT, and formal analyses were performed by AFO, IMS, and PB–H. The first draft of the manuscript was written by AFO and SF and all authors commented on all versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ana F. Oliveira.

Ethics declarations

Ethics approval and consent to participate

This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics and Deontology Committee of the University of Aveiro (22 January 2020/ No. 30/2019). Informed consent was obtained from all individual participants included in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Factor analysis of the 33-item FACT-Cog-v3.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Oliveira, A.F., Santos, I.M., Fernandes, S. et al. Validation study of the Functional Assessment of Cancer Therapy-Cognitive Function-Version 3 for the Portuguese population. BMC Psychol 10, 305 (2022). https://doi.org/10.1186/s40359-022-01018-w
