Reporting completeness and transparency of meta-analyses of depression screening tool accuracy: A comparison of meta-analyses published before and after the PRISMA statement
Introduction
Major depressive disorder is present in 5–10% of primary care patients [1] and 10–20% of patients with acute and chronic medical conditions [2], [3], [4], [5], [6], [7]. Effective interventions to reduce the burden of depression are available, but care is often inconsistent. Many depressed patients are not diagnosed, and a high proportion of patients treated for depression do not meet diagnostic criteria [8], [9], [10], [11], [12]. Routine screening, which involves using self-report questionnaires to identify patients who may have depression, has been proposed to improve depression recognition and management [13], [14]. Recommendations for depression screening, however, are controversial and vary substantially across national guidelines and policies [15].
The US Preventive Services Task Force recommends screening adults for depression in primary care settings [16]. A 2010 guideline by the UK National Institute for Health and Care Excellence, on the other hand, did not recommend routine depression screening, but suggested that clinicians be alert to depressive symptoms in patients who may be at an increased risk, such as those with a history of depression [1]. The Canadian Task Force on Preventive Health Care (CTFPHC) similarly recommends against depression screening in primary care [17]. In its 2013 guideline statement, the CTFPHC raised concerns about the quality of evidence on the accuracy of depression screening tools to identify new cases of depression, including evidence reported in systematic reviews and meta-analyses [17].
Meta-analyses are cited more than any other study design [18] and are prioritized in grading evidence for practice guidelines [19]. When both conducted rigorously and reported completely and transparently, meta-analyses can provide clear evidence on important health care questions [20]. Results from primary studies on the diagnostic accuracy of depression screening tools, however, are often inconsistent, and it has been suggested that meta-analyses may not adequately account for bias and methodological limitations in these studies [21]. A recent study used an adapted version of the AMSTAR tool to assess the methodological quality of meta-analyses of the diagnostic test accuracy of depression screening tools and found that methods were inadequate in the majority; only 6 of 21 meta-analyses complied with even 7 of 14 quality rating items [22]. The ability to confidently interpret and utilize evidence from meta-analyses depends on both the rigor of the conduct of the meta-analyses and the transparency and completeness of reporting.
The PRISMA checklist is a 27-item tool that was developed to guide the reporting of systematic reviews and meta-analyses of randomized controlled trials [20]. In the absence of a PRISMA checklist designed for reviews of diagnostic test accuracy, we applied the PRISMA checklist with certain items adapted to reflect reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies. The objective of our study was to evaluate the transparency and completeness of reporting in meta-analyses of the diagnostic accuracy of depression screening tools, using the adapted PRISMA tool. As part of this, we compared the transparency and completeness of reporting in meta-analyses published before and after the publication of PRISMA.
Section snippets
Identification of meta-analyses on the diagnostic accuracy of depression screening tools
We searched MEDLINE and PsycINFO (both on the OvidSP platform) from January 1, 2005 through March 15, 2016 for meta-analyses in any language that reported on the diagnostic accuracy of depression screening tools. We restricted the search to this period of time in order to identify relatively recent meta-analyses. We adapted a MEDLINE search strategy originally designed to identify primary studies on the diagnostic accuracy of depression screening tools, which was developed by a medical
Results
The electronic database search yielded 1522 unique titles and abstracts for review. Of these, 1492 were excluded after title and abstract review because they did not report results from a meta-analysis or because the study was not related to the diagnostic accuracy of a depression screening tool. Of the 30 articles that underwent full-text review, 9 were excluded (see Appendix C), resulting in 21 eligible meta-analyses [28], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41],
Discussion
The main findings of this study were that (1) close to half (47%) of adapted PRISMA items were fulfilled by the majority of 21 recent meta-analyses of the diagnostic test accuracy of depression screening; (2) 12 of 21 meta-analyses had yes ratings for at least half of the adapted PRISMA items; and (3) the mean number of items where PRISMA criteria were fulfilled increased from 13 pre-PRISMA to 17 post-PRISMA.
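The three summary statistics above can be reproduced from a simple ratings matrix (one row per meta-analysis, one column per adapted PRISMA item, yes/no per item). The sketch below is purely illustrative: the `summarize` function and the toy data are hypothetical, not the authors' actual extraction data or analysis code.

```python
# Hypothetical sketch of the adherence tabulation described above.
# ratings: one row per meta-analysis, one column per adapted PRISMA
# item (True = item fulfilled). All names and values are illustrative.
from statistics import mean

def summarize(ratings, published_post_prisma):
    n_reviews = len(ratings)
    n_items = len(ratings[0])
    # (1) Number of items fulfilled by a majority of meta-analyses
    items_met_by_majority = sum(
        1 for j in range(n_items)
        if sum(r[j] for r in ratings) > n_reviews / 2
    )
    # (2) Number of meta-analyses fulfilling at least half of the items
    reviews_meeting_half = sum(
        1 for r in ratings if sum(r) >= n_items / 2
    )
    # (3) Mean items fulfilled, pre- vs post-PRISMA publication
    pre = [sum(r) for r, post in zip(ratings, published_post_prisma) if not post]
    post = [sum(r) for r, post in zip(ratings, published_post_prisma) if post]
    return items_met_by_majority, reviews_meeting_half, mean(pre), mean(post)
```

For example, with three reviews rated on four items, `summarize` returns the per-item majority count, the count of reviews meeting at least half the items, and the pre/post means in one pass.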
Apparent changes pre- to post-PRISMA, however, may have been related to authorship
Conclusions
In conclusion, the present study found that 12 of 21 meta-analyses of the diagnostic accuracy of depression screening tools met at least half of the adapted PRISMA items related to reporting transparency. There was a wide range of adherence to adapted PRISMA items, with 11 items fulfilled in at least 80% of meta-analyses but 9 items fulfilled in < 25% of meta-analyses. Further, even studies published post-PRISMA had deficiencies in reporting, which have been highlighted. To accurately interpret
Conflicts of interest
All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf and declare that no authors have any conflict of interest disclosures for the past 3-year reporting period.
Acknowledgements
Ms. Rice was supported by a Fonds de recherche du Québec – Santé (FRQS) Masters Scholarship. Dr. Thombs was supported by an Investigator Salary Award from the Arthritis Society. There was no specific funding for this study, and no funders had any role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Authors had full access to the data and can take responsibility for the integrity of the data and the accuracy of the data analysis.
References (51)
- et al., Mood disorders in the medically ill: scientific review and recommendations, Biol. Psychiatry (2005)
- et al., Major depression after breast cancer: a review of epidemiology and treatment, Gen. Hosp. Psychiatry (2008)
- et al., Epidemiology of comorbid coronary artery disease and depression, Biol. Psychiatry (2003)
- et al., What is the association between quality of treatment for depression and patient outcomes? A cohort study of adults consulting in primary care, J. Affect. Disord. (2013)
- et al., Clinical diagnosis of depression in primary care: a meta-analysis, Lancet (2009)
- et al., Methodological quality of meta-analyses of the diagnostic accuracy of depression screening tools, J. Psychosom. Res. (2016)
- et al., The Hospital Anxiety and Depression Scale: a diagnostic meta-analysis of case-finding ability, J. Psychosom. Res. (2010)
- et al., A diagnostic meta-analysis of the Patient Health Questionnaire-9 (PHQ-9) algorithm scoring method as a screen for depression, Gen. Hosp. Psychiatry (2015)
- et al., Which version of the geriatric depression scale is most useful in medical settings and nursing homes? Diagnostic validity meta-analysis, Am. J. Geriatr. Psychiatry (2010)
- et al., Diagnostic validity and added value of the Geriatric Depression Scale for depression in primary care: a meta-analysis of GDS30 and GDS15, J. Affect. Disord. (2010)
- Meta-analysis of screening and case finding tools for depression in cancer: evidence based recommendations for clinical practice on behalf of the Depression in Cancer Care consensus group, J. Affect. Disord.
- Diagnostic validity of the Hospital Anxiety and Depression Scale (HADS) in cancer and palliative settings: a meta-analysis, J. Affect. Disord.
- Symptom screening scales for detecting major depressive disorder in children and adolescents: a systematic review and meta-analysis of reliability, validity and diagnostic utility, J. Affect. Disord.
- Diagnostic accuracy of the mood module of the Patient Health Questionnaire: a systematic review, Gen. Hosp. Psychiatry
- Screening and case finding for major depressive disorder using the Patient Health Questionnaire (PHQ-9): a meta-analysis, Gen. Hosp. Psychiatry
- Depression: The Treatment and Management of Depression in Adults (Updated Edition)
- The prevalence of co-morbid depression in adults with Type 2 diabetes: a systematic review and meta-analysis, Diabet. Med.
- The burden of depression in patients with rheumatoid arthritis, Rheumatology (Oxford)
- Prevalence of depression in survivors of acute myocardial infarction, J. Gen. Intern. Med.
- Guideline concordance of treatment for depressive disorders in Canada, Soc. Psychiatry Psychiatr. Epidemiol.
- Clinician-identified depression in community settings: concordance with structured-interview diagnoses, Psychother. Psychosom.
- Proportion of antidepressants prescribed without a psychiatric diagnosis is growing, Health Aff.
- Screening for depression in adults: a summary of the evidence for the U.S. Preventive Services Task Force, Ann. Intern. Med.
- Should general practitioners be testing for depression? Br. J. Gen. Pract.
- Does depression screening improve depression outcomes in primary care? BMJ