User:Ongmianli/Center for Epidemiological Studies Depression Scale

Lead section

This will be the lead section. It should give a quick summary of what the assessment is. Here are some pointers (please do not use bullet points when writing the article):

  • What are its acronyms?
  • What is its purpose?
  • What population is it intended for? What do the items measure?
  • How long does it take to administer?
  • Who (individual or groups) was it created by?
  • How many questions does it contain? Is it multiple choice?
  • What has been its impact on the clinical world in general?
  • Who uses it? Clinicians? Researchers? What settings?

Versions

  • What versions of this test exist, if any? For each version, there should be a description of the test.
  • What are its intended population, number of questions, and acronyms?

Reliability

The rubrics for evaluating reliability and validity are here. You will evaluate the instrument based on these rubrics. Then, you will delete the code for the rubric and complete the table (located after the rubrics). Don't forget to adjust the headings once you copy/paste the table in!

An example using the table from the General Behavior Inventory is attached below.

Example tables

Evaluating norms and reliability

Rubric for evaluating norms and reliability for assessments (extending Hunsley & Mash, 2008; *indicates new construct or category)
  • Norms. Adequate: Mean and standard deviation for total score (and subscores if relevant) from a large, relevant clinical sample. Good: Mean and standard deviation for total score (and subscores if relevant) from multiple large, relevant samples, at least one clinical and one nonclinical. Excellent: Same as "good," but must be from a representative sample (i.e., random sampling, or matching to census data). Too good: Not a concern.
  • Internal consistency (Cronbach's alpha, split half, etc.). Adequate: Most evidence shows Cronbach's alpha values of .70 to .79. Good: Most reported alphas .80 to .89. Excellent: Most reported alphas ≥ .90. Too good: Alpha is also tied to scale length and content coverage; very high alphas may indicate that the scale is longer than needed, or that it has a very narrow scope.
  • Inter-rater reliability. Adequate: Most evidence shows kappas of .60 to .74, or intraclass correlations (ICCs) of .70 to .79. Good: Most reported kappas of .75 to .84, or ICCs of .80 to .89. Excellent: Most kappas ≥ .85, or ICCs ≥ .90. Too good: Very high levels of agreement are often achieved by re-rating from audio or transcript.
  • Test-retest reliability (stability). Adequate: Most evidence shows test-retest correlations ≥ .70 over a period of several days or weeks. Good: Most evidence shows test-retest correlations ≥ .70 over a period of several months. Excellent: Most evidence shows test-retest correlations ≥ .70 over a year or longer. Too good: The key consideration is an appropriate time interval; many constructs would not be stable for years at a time.
  • *Repeatability. Adequate: Bland-Altman plots (Bland & Altman, 1986) show small bias and/or weak trends; coefficient of repeatability is tolerable compared to clinical benchmarks (Vaz, Falkmer, Passmore, Parsons, & Andreou, 2013). Good: Bland-Altman plots and corresponding regressions show no significant bias and no significant trends; coefficient of repeatability is tolerable. Excellent: Bland-Altman plots and corresponding regressions show no significant bias and no significant trends across multiple studies; coefficient of repeatability is small enough that it is not clinically concerning. Too good: Not a concern.
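For editors less familiar with these statistics, the following is a minimal illustrative sketch of how three of the reliability statistics named in the rubric above (Cronbach's alpha, the test-retest correlation, and the Bland-Altman coefficient of repeatability) can be computed. The function names and the toy numbers are hypothetical; they are not CES-D or GBI data.

    # Illustrative only: toy data, not CES-D or GBI scores.
    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
        x = np.asarray(item_scores, dtype=float)
        k = x.shape[1]                              # number of items
        item_variances = x.var(axis=0, ddof=1)      # variance of each item
        total_variance = x.sum(axis=1).var(ddof=1)  # variance of the total scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    def test_retest_r(time1, time2):
        """Pearson correlation between scores from two administrations."""
        return np.corrcoef(time1, time2)[0, 1]

    def coefficient_of_repeatability(time1, time2):
        """Bland-Altman coefficient of repeatability: 1.96 * SD of the paired differences."""
        diffs = np.asarray(time1, dtype=float) - np.asarray(time2, dtype=float)
        return 1.96 * diffs.std(ddof=1)

    # Five respondents answering four items, then retested on their total scores.
    items = [[1, 2, 1, 2], [3, 3, 2, 3], [0, 1, 1, 0], [2, 2, 3, 2], [3, 2, 3, 3]]
    totals_time1 = [6, 11, 2, 9, 11]
    totals_time2 = [7, 10, 3, 9, 12]
    print(cronbach_alpha(items))
    print(test_retest_r(totals_time1, totals_time2))
    print(coefficient_of_repeatability(totals_time1, totals_time2))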

Validity

Rubric for evaluating validity and utility (extending Hunsley & Mash, 2008 ; *indicates new construct or category)
  • Content validity. Adequate: Test developers clearly defined the domain and ensured representation of the entire set of facets. Good: As adequate, plus all elements (items, instructions) evaluated by judges (experts or pilot participants). Excellent: As good, plus multiple groups of judges and quantitative ratings. Too excellent: Not a problem; can point out that many measures do not cover all of the DSM criteria now.
  • Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity). Adequate: Some independently replicated evidence of construct validity. Good: Bulk of independently replicated evidence shows multiple aspects of construct validity. Excellent: As good, plus evidence of incremental validity with respect to other clinical data. Too excellent: Not a problem.
  • *Discriminative validity. Adequate: Statistically significant discrimination in multiple samples, with areas under the curve (AUCs) < .60 under clinically realistic conditions (i.e., not comparing treatment-seeking and healthy youth). Good: AUCs of .60 to < .75 under clinically realistic conditions. Excellent: AUCs of .75 to .90 under clinically realistic conditions. Too excellent: AUCs > .90 should trigger careful evaluation of the research design and comparison group; such values are more likely to be a biased than an accurate estimate of clinical performance.
  • *Prescriptive validity. Adequate: Statistically significant accuracy at identifying a diagnosis with a well-specified matching intervention, or statistically significant moderation of treatment. Good: As "adequate," with good kappa for diagnosis, or significant treatment moderation in more than one sample. Excellent: As "good," with good kappa for diagnosis in more than one sample, or a moderate effect size for treatment moderation. Too excellent: Not a problem with the measure or finding, per se; but high predictive validity may obviate the need for other assessment components. Compare on utility.
  • Validity generalization. Adequate: Some evidence supports use with more than one specific demographic group or in more than one setting. Good: Bulk of evidence supports use with more than one specific demographic group or in multiple settings. Excellent: Bulk of evidence supports use with more than one specific demographic group and in multiple settings. Too excellent: Not a problem.
  • Treatment sensitivity. Adequate: Some evidence of sensitivity to change over the course of treatment. Good: Independent replications show evidence of sensitivity to change over the course of treatment. Excellent: As good, plus sensitive to change across different types of treatments. Too excellent: Not a problem.
  • Clinical utility. Adequate: After practical considerations (e.g., costs, ease of administration and scoring, duration, availability of relevant benchmark scores, patient acceptability), assessment data are likely to be clinically useful. Good: As adequate, plus published evidence that using the assessment data confers clinical benefit (e.g., better outcome, lower attrition, greater satisfaction). Excellent: As good, plus independent replication. Too excellent: Not a problem.
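As a companion to the discriminative validity row above, here is a minimal sketch of the nonparametric (rank-based) AUC calculation. The scores and group assignments are hypothetical and are not drawn from any published sample.

    # Illustrative only: hypothetical scores, not from any published sample.
    import numpy as np

    def auc(case_scores, noncase_scores):
        """Area under the ROC curve via the Mann-Whitney formulation: the
        probability that a randomly chosen case scores higher than a randomly
        chosen non-case, counting ties as one half."""
        cases = np.asarray(case_scores, dtype=float)
        noncases = np.asarray(noncase_scores, dtype=float)
        higher = (cases[:, None] > noncases[None, :]).sum()
        ties = (cases[:, None] == noncases[None, :]).sum()
        return (higher + 0.5 * ties) / (cases.size * noncases.size)

    # Total scores for a hypothetical clinical comparison-group design.
    print(auc([22, 30, 27, 35, 19], [12, 15, 20, 9, 14]))  # 0.96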

Actual tables to fill in

Reliability

Evaluation of norms and reliability for the General Behavior Inventory (table from Youngstrom et al., extending Hunsley & Mash, 2008; *indicates new construct or category)
  • Norms. Rating: Adequate. Multiple convenience samples and research studies, including both clinical and nonclinical samples.[citation needed]
  • Internal consistency (Cronbach's alpha, split half, etc.). Rating: Excellent; too good for some contexts. Alphas are routinely over .94 for both scales, suggesting that the scales could be shortened for many uses.[citation needed]
  • Inter-rater reliability. Rating: Not applicable. Designed originally as a self-report scale; parent and youth report correlate about the same as cross-informant scores correlate in general.[1]
  • Test-retest reliability (stability). Rating: Good. r = .73 over 15 weeks; evaluated in the initial studies,[2] with data also showing high stability in clinical trials.[citation needed]
  • Repeatability. Rating: Not published. No published studies have formally checked repeatability.

Validity

Evaluation of validity and utility for the General Behavior Inventory (table from Youngstrom et al., unpublished, extended from Hunsley & Mash, 2008; *indicates new construct or category)
  • Content validity. Rating: Excellent. Covers both DSM diagnostic symptoms and a range of associated features.[2]
  • Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity). Rating: Excellent. Shows convergent validity with other symptom scales, longitudinal prediction of the development of mood disorders,[3][4][5] criterion validity via metabolic markers,[2][6] and associations with family history of mood disorder.[7] The factor structure is complicated;[2][8] the inclusion of "biphasic" or "mixed" mood items creates a lot of cross-loading.
  • Discriminative validity. Rating: Excellent. Multiple studies show that GBI scores discriminate cases with unipolar and bipolar mood disorders from other clinical disorders;[2][9][10] effect sizes are among the largest of existing scales.[11]
  • Validity generalization. Rating: Good. Used both as self-report and caregiver report; used in college student samples[8][12] as well as outpatient[9][13][14] and inpatient clinical samples; translated into multiple languages with good reliability.
  • Treatment sensitivity. Rating: Good. Multiple studies show sensitivity to treatment effects comparable to interviews by trained raters, including placebo-controlled, masked assignment trials.[15][16] Short forms appear to retain sensitivity to treatment effects while substantially reducing burden.[16][17]
  • Clinical utility. Rating: Good. Free (public domain), strong psychometrics, extensive research base. The biggest concerns are length and reading level. Short forms have less research, but are appealing based on reduced burden and promising data.

Development and history

  • Why was this instrument developed? Why was there a need to do so? What need did it meet?
  • What was the theoretical background behind this assessment? (e.g., addresses the importance of "negative cognitions", such as intrusive, inaccurate, or sustained thoughts)
  • How was the scale developed?
  • How are these questions reflected in applications to theories, such as cognitive behavioral therapy (CBT)?
  • If there were previous versions, when were they published?
  • Discuss the theoretical ideas behind the changes

Impact

  • What was the impact of this assessment? How did it affect assessment practices in psychiatry, psychology, and other health care professions?
  • What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?

Use in other populations

  • How widely has it been used? Has it been translated into different languages? Which languages?

Research

  • Any recent research done that is pertinent?

Limitations

  • If it is a self-report measure, what are the usual limitations of self-report?
  • State the status of this assessment (is it copyrighted? If free, link to it).

See also

Here, it would be good to link to any related articles on Wikipedia. As we create more assessment pages, this should grow.

For instance:

External links

Example page

References

  1. Achenbach, TM; McConaughy, SH; Howell, CT (March 1987). "Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity". Psychological Bulletin. 101 (2): 213–32. PMID 3562706.
  2. Depue, Richard A.; Slater, Judith F.; Wolfstetter-Kausch, Heidi; Klein, Daniel; Goplerud, Eric; Farr, David (1981). "A behavioral paradigm for identifying persons at risk for bipolar depressive disorder: A conceptual framework and five validation studies". Journal of Abnormal Psychology. 90 (5): 381–437. doi:10.1037/0021-843X.90.5.381.
  3. Klein, DN; Dickstein, S; Taylor, EB; Harding, K (February 1989). "Identifying chronic affective disorders in outpatients: validation of the General Behavior Inventory". Journal of Consulting and Clinical Psychology. 57 (1): 106–11. PMID 2925959.
  4. Mesman, Esther; Nolen, Willem A.; Reichart, Catrien G.; Wals, Marjolein; Hillegers, Manon H.J. (May 2013). "The Dutch Bipolar Offspring Study: 12-Year Follow-Up". American Journal of Psychiatry. 170 (5): 542–549. doi:10.1176/appi.ajp.2012.12030401.
  5. Reichart, CG; van der Ende, J; Wals, M; Hillegers, MH; Nolen, WA; Ormel, J; Verhulst, FC (December 2005). "The use of the GBI as predictor of bipolar disorder in a population of adolescent offspring of parents with a bipolar disorder". Journal of Affective Disorders. 89 (1–3): 147–55. PMID 16260043.
  6. Depue, RA; Kleiman, RM; Davis, P; Hutchinson, M; Krauss, SP (February 1985). "The behavioral high-risk paradigm and bipolar affective disorder, VIII: Serum free cortisol in nonpatient cyclothymic subjects selected by the General Behavior Inventory". The American Journal of Psychiatry. 142 (2): 175–81. PMID 3970242.
  7. Klein, DN; Depue, RA (August 1984). "Continued impairment in persons at risk for bipolar affective disorder: results of a 19-month follow-up study". Journal of Abnormal Psychology. 93 (3): 345–7. PMID 6470321.
  8. Pendergast, Laura L.; Youngstrom, Eric A.; Brown, Christopher; Jensen, Dane; Abramson, Lyn Y.; Alloy, Lauren B. (2015). "Structural invariance of General Behavior Inventory (GBI) scores in Black and White young adults". Psychological Assessment. 27 (1): 21–30. doi:10.1037/pas0000020.
  9. Danielson, CK; Youngstrom, EA; Findling, RL; Calabrese, JR (February 2003). "Discriminative validity of the general behavior inventory using youth report". Journal of Abnormal Child Psychology. 31 (1): 29–39. PMID 12597697.
  10. Findling, RL; Youngstrom, EA; Danielson, CK; DelPorto-Bedoya, D; Papish-David, R; Townsend, L; Calabrese, JR (February 2002). "Clinical decision-making using the General Behavior Inventory in juvenile bipolarity". Bipolar Disorders. 4 (1): 34–42. PMID 12047493.
  11. Youngstrom, Eric A.; Genzlinger, Jacquelynne E.; Egerton, Gregory A.; Van Meter, Anna R. (2015). "Multivariate meta-analysis of the discriminative validity of caregiver, youth, and teacher rating scales for pediatric bipolar disorder: Mother knows best about mania". Archives of Scientific Psychology. 3 (1): 112–137. doi:10.1037/arc0000024.
  12. Alloy, LB; Abramson, LY; Hogan, ME; Whitehouse, WG; Rose, DT; Robinson, MS; Kim, RS; Lapkin, JB (August 2000). "The Temple-Wisconsin Cognitive Vulnerability to Depression Project: lifetime history of axis I psychopathology in individuals at high and low cognitive risk for depression". Journal of Abnormal Psychology. 109 (3): 403–18. PMID 11016110.
  13. Klein, Daniel N.; Dickstein, Susan; Taylor, Ellen B.; Harding, Kathryn (1989). "Identifying chronic affective disorders in outpatients: Validation of the General Behavior Inventory". Journal of Consulting and Clinical Psychology. 57 (1): 106–111. doi:10.1037/0022-006X.57.1.106.
  14. Youngstrom, EA; Findling, RL; Danielson, CK; Calabrese, JR (June 2001). "Discriminative validity of parent report of hypomanic and depressive symptoms on the General Behavior Inventory". Psychological Assessment. 13 (2): 267–76. PMID 11433802.
  15. Findling, RL; Youngstrom, EA; McNamara, NK; Stansbrey, RJ; Wynbrandt, JL; Adegbite, C; Rowles, BM; Demeter, CA; Frazier, TW; Calabrese, JR (January 2012). "Double-blind, randomized, placebo-controlled long-term maintenance study of aripiprazole in children with bipolar disorder". The Journal of Clinical Psychiatry. 73 (1): 57–63. PMID 22152402.
  16. Youngstrom, E; Zhao, J; Mankoski, R; Forbes, RA; Marcus, RM; Carson, W; McQuade, R; Findling, RL (March 2013). "Clinical significance of treatment effects with aripiprazole versus placebo in a study of manic or mixed episodes associated with pediatric bipolar I disorder". Journal of Child and Adolescent Psychopharmacology. 23 (2): 72–9. PMID 23480324.
  17. Ong, ML; Youngstrom, EA; Chua, JJ; Halverson, TF; Horwitz, SM; Storfer-Isser, A; Frazier, TW; Fristad, MA; Arnold, LE; Phillips, ML; Birmaher, B; Kowatch, RA; Findling, RL; LAMS Group (1 July 2016). "Comparing the CASI-4R and the PGBI-10M for Differentiating Bipolar Spectrum Disorders from Other Outpatient Diagnoses in Youth". Journal of Abnormal Child Psychology. PMID 27364346.