
Tuesday, 17 May 2022

Montgomery–Åsberg Depression Rating Scale (MADRS)

One study was found that explored the measurement properties of the MADRS in general population adults. Evidence for internal consistency was indeterminate: the one available study was rated as poor, because the unidimensionality of the scale was not checked. Evidence for criterion validity was moderate, with high sensitivity and specificity (>.95) for clinical diagnosis of depression.

CESD‐R


The CESD was recently updated as the CESD-R to map more closely onto DSM criteria, so we focused the search on the updated version, as it will likely be used in future research and clinical practice. Only one study was found that explored the measurement properties of the CESD-R in general population adults without comorbid conditions. Evidence in support of internal consistency was strong, with the one study rated as excellent in methodological quality. Evidence in support of structural validity was strong, with the study showing evidence for a single-factor solution. Evidence in support of hypothesis testing was moderate, with the study showing acceptable correlations with other measures of similar constructs (r > .58).

BDI-II

Four studies were found that assessed a range of measurement properties of the BDI-II in general population adults without comorbid conditions. There was weak evidence in support of internal consistency, as many studies did not calculate Cronbach's alpha for each subscale separately; however, all studies supported the internal consistency of the BDI-II total score, with acceptable alphas (>.7). There was weak evidence in support of test-retest reliability, from one fair-quality study (it was unclear how missing items were handled) reporting a high coefficient (.89). There was strong evidence for content validity from one methodologically excellent study of the BDI-II in a non-English-speaking Kenyan sample. There was moderate evidence in support of structural validity from two studies, both showing fair evidence for a single-factor solution. Evidence for hypothesis testing was moderate: the BDI-II showed acceptable correlations with other depression measures (r > .57). There was weak evidence for cross-cultural validity, owing to weaknesses in the quality of the translations (only one forward/backward translation) or failure to pretest the items in a sample for interpretability and cultural relevance. There was moderate evidence for criterion validity: the BDI-II showed adequate sensitivity (>.7) and specificity (>.8) in detecting Major Depressive Episodes, with clinician ratings used as the criterion.
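Several of the internal-consistency ratings above hinge on Cronbach's alpha. As a minimal sketch of how that index is computed (the item responses below are entirely hypothetical, not data from any reviewed study):

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondents' item-score lists."""
    k = len(rows[0])                       # number of items
    cols = list(zip(*rows))                # per-item score columns
    item_var_sum = sum(variance(col) for col in cols)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 5 respondents x 4 items (0-3 scoring assumed)
scores = [
    [3, 3, 2, 3],
    [1, 1, 1, 2],
    [2, 2, 2, 2],
    [3, 2, 3, 3],
    [0, 1, 0, 1],
]
alpha = cronbach_alpha(scores)  # ~0.95, above the .7 acceptability threshold
```

Note that alpha computed on a total score assumes unidimensionality, which is exactly why studies that skip a factor-analytic check (as flagged for the MADRS above) receive poorer methodological ratings.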

 

PHQ-9

Six studies were found that explored a range of measurement properties of the PHQ-9 in general population adults. There was moderate evidence in support of internal consistency with adequate Cronbach's alphas (>.7) for the unidimensional measure (confirmed using IRT methods and factor analytic methods). There was moderate evidence in support of test-retest reliability with correlations >.7. There was moderate evidence for structural validity showing consistent evidence for a one-factor solution (using factor analysis). There was moderate evidence for hypothesis testing; the PHQ-9 correlated strongly with other measures of similar constructs (e.g., the BDI), and support was found for consistent factor structure across time points and subgroups. There was moderate evidence for criterion validity, with acceptable sensitivity and specificity (>.79) in detecting clinical diagnosis of depressive disorder.
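Criterion-validity figures such as "sensitivity and specificity >.79" come from cross-tabulating screen-positive status against the clinician diagnosis used as the criterion. A minimal sketch, using made-up PHQ-9 totals and an assumed screen-positive cutoff of ≥10 (both the scores and the cutoff here are illustrative assumptions, not values from the reviewed studies):

```python
def sensitivity_specificity(scores, cases, cutoff):
    """Sensitivity/specificity of screen-positive status (score >= cutoff)
    against a clinician diagnosis used as the criterion (True = case)."""
    tp = sum(1 for s, c in zip(scores, cases) if s >= cutoff and c)
    fn = sum(1 for s, c in zip(scores, cases) if s < cutoff and c)
    tn = sum(1 for s, c in zip(scores, cases) if s < cutoff and not c)
    fp = sum(1 for s, c in zip(scores, cases) if s >= cutoff and not c)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PHQ-9 totals and clinician diagnoses
scores = [15, 12, 4, 8, 18, 3, 11, 6]
cases = [True, True, False, False, True, False, False, False]
sens, spec = sensitivity_specificity(scores, cases, cutoff=10)
```

In this toy sample the one false positive (score 11, no diagnosis) lowers specificity while sensitivity stays perfect, illustrating the trade-off that moving the cutoff controls.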

ZDS


Two studies were found that explored the measurement properties of the ZDS. There was strong evidence in support of structural validity with a 2‐factor solution. There was weak evidence in support of hypothesis testing, showing a significant correlation with other measures of similar constructs (r > .61).

General Note


Caveats in rating with the PANSS deserve comment, since it has been the standard scale among the available options. Double-blind studies have offered the most solid evidence: independent raters assess patients at baseline, and typically the same raters follow the same patients throughout. To maintain true blindness, every assessment could instead be performed by a different rater, which poses two major problems: feasibility (assuring an adequate number of raters) and inter-rater reliability.

 

Therefore, two possibilities in a typical study should be noted as confounding factors in quantification with the scales. First, the result of the baseline assessment has a significant impact on later assessments. As for a rater effect at baseline, it has been reported that a psychiatrist seeing a patient for the first time underrated the PANSS scores by 10% compared with the scores obtained by the treating psychiatrist, who knew that patient very well.56 Second, if psychological rapport between patients and assessors improves with repeated encounters, patients may feel less guarded and express themselves more frankly (for instance, about hidden delusions).

 

Conversely, assessors may become psychologically accustomed to patients, which would not necessarily translate into higher severity scores (despite a possible increase in identifiable symptoms). These issues are expected to produce rater drift across longitudinal assessments. Performance-based, objective rating scales could overcome these issues, but they are mostly applicable to cognitive measurements and to a subset of functional scales. As such, although rater effects and rater drift have rarely been the target of studies, more work is clearly indicated for the purpose of better 'quantification' with the rating scales.

 

Finally, given the varied needs of patients with schizophrenia, it might be appropriate to make use of scales that are miscellaneous in nature. Examples are the Targeted Inventory on Problems in Schizophrenia (TIP-Sz30; 10 items) and the Investigator's Assessment Questionnaire (IAQ57; 10 items). On the other hand, apart from the greater time requirement and the possibility that patients may not tolerate lengthy assessments, the use of multiple scales makes summarizing the data more challenging. In this context, reporting results separately from the parent study is common, although tracing such reports back to the original study can be complicated.

 

The author recommends that global functioning always be reported with a simple scale, since it can represent the most proximal effects of various distal elements of the illness. More work is necessary on 'subjectivity' in the subjective assessment scales used with patients with schizophrenia. Further, it would be useful to have a scale that comprehensively covers both motor and non-motor adverse effects.

The mini-mental state examination


The MMSE52 (30 points across seven categories), while time-friendly, has not been widely utilized and may be too coarse to evaluate cognition in schizophrenia. This topic is extensively reviewed elsewhere (e.g., in the National Institute of Mental Health's Measurement and Treatment Research to Improve Cognition in Schizophrenia initiative: MATRICS53). The MATRICS battery consists of 10 tests representing seven cognitive domains, and completion time is estimated at about 65 minutes.


Other, briefer batteries include the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS54), reported to take <30 minutes, and the Brief Assessment of Cognition in Schizophrenia (BACS55), which needs <35 minutes. Important issues, apart from time burden, are whether stability (as an intermediate phenotype or endophenotype) versus changeability in cognition is to be assumed, and how a change in cognitive test scores translates into actual outcomes in other domains of the illness. Also, it would be useful to have a concept of 'responder' (or treatment resistance), as has been defined with a ≥20% decrease in the PANSS score and so forth (Suzuki et al., revision submitted).
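The "≥20% decrease" responder definition mentioned above is straightforward to operationalize. A minimal sketch with hypothetical PANSS totals (note that some definitions first subtract the 30-point PANSS floor before computing percent change; this version uses raw totals):

```python
def panss_responder(baseline_total, endpoint_total, threshold=0.20):
    """Responder if the PANSS total fell by at least `threshold`
    (20% by default) relative to baseline. Raw totals are used here;
    some definitions first subtract the 30-point scale minimum."""
    decrease = (baseline_total - endpoint_total) / baseline_total
    return decrease >= threshold

r1 = panss_responder(95, 70)  # ~26% decrease -> True
r2 = panss_responder(95, 85)  # ~11% decrease -> False
```

The choice of whether to subtract the scale floor matters: a drop from 95 to 85 is ~11% of the raw total but ~15% of the 65 points above the floor, so studies using different conventions can classify the same patient differently.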

Cognition


Different studies have utilized different assessment measures.

Monday, 16 May 2022

Cognition


While cognitive assessments were performed in only 11% of studies in 1999 and 2004, they were reported in 30% of studies in 2009. The assessments used also showed much more variety in 2009, expanding from classical paper-and-pencil tests to computerized facial emotion recognition tests and to batteries of multiple tests expressed as a composite cognitive score.
