eCite Digital Repository

Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability

Citation

Catts, SV and Frost, AD and O'Toole, BI and Carr, VJ and Lewin, T and Neil, AL and Harris, MG and Evans, RW and Crissman, BR and Eadie, K, Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability, Australian and New Zealand Journal of Psychiatry, 45(1), pp. 63-75, ISSN 0004-8674 (2011) [Refereed Article]

Copyright Statement

Copyright 2011 Royal Australian and New Zealand College of Psychiatrists

DOI: 10.3109/00048674.2010.524621

Abstract

AIM: Clinical practice improvement carried out in a quality assurance framework relies on routinely collected data using clinical indicators. Herein we describe the development, minimum training requirements, and inter-rater agreement of indicators that were used in an Australian multi-site evaluation of the effectiveness of early psychosis (EP) teams.

METHODS: Surveys of clinician opinion and face-to-face consensus-building meetings were used to select and conceptually define indicators. Operationalization of definitions was achieved by iterative refinement until clinicians could be quickly trained to code indicators reliably. Calculation of percentage agreement with expert consensus coding was based on ratings of paper-based clinical vignettes embedded in a 2-h clinician training package.

RESULTS: Consensually agreed upon conceptual definitions for seven clinical indicators judged most relevant to evaluating EP teams were operationalized for ease-of-training. Brief training enabled typical clinicians to code indicators with acceptable percentage agreement (60% to 86%). For indicators of suicide risk, psychosocial function, and family functioning this level of agreement was only possible with less precise 'broad range' expert consensus scores. Estimated kappa values indicated fair to good inter-rater reliability (kappa > 0.65). Inspection of contingency tables (coding category by health service) and modal scores across services suggested consistent, unbiased coding across services.

CONCLUSIONS: Clinicians are able to agree upon what information is essential to routinely evaluate clinical practice. Simple indicators of this information can be designed, and coding rules can be reliably applied to written vignettes after brief training. The real-world feasibility of the indicators remains to be tested in field trials.
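The kappa statistic reported in the abstract corrects raw percentage agreement for the agreement expected by chance. As a rough illustration only (the data and function names below are hypothetical, not drawn from the paper), Cohen's kappa for two raters coding the same set of vignettes can be computed as:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters assign the same code."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance,
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected chance agreement from each rater's marginal code frequencies.
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two clinicians rating ten vignettes (illustrative data).
clinician_1 = [1, 2, 2, 3, 1, 1, 2, 3, 3, 2]
clinician_2 = [1, 2, 2, 3, 1, 2, 2, 3, 1, 2]
print(round(percent_agreement(clinician_1, clinician_2), 2))
print(round(cohens_kappa(clinician_1, clinician_2), 2))
```

Because kappa discounts chance agreement, it is typically lower than raw percentage agreement for the same data, which is why the paper reports both measures.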

Item Details

Item Type: Refereed Article
Keywords: programme evaluation, first episode, schizophrenia, quality, practice improvement
Research Division: Medical and Health Sciences
Research Group: Public Health and Health Services
Research Field: Mental Health
Objective Division: Health
Objective Group: Public Health (excl. Specific Population Health)
Objective Field: Mental Health
Author: Neil, AL (Dr Amanda Neil)
ID Code: 93278
Year Published: 2011
Deposited By: Menzies Institute for Medical Research
Deposited On: 2014-07-24
Last Modified: 2016-10-18
Downloads: 0
