Inter-rater Reliability of Clinical Ratings: A Brief Primer on Kappa

Daniel H. Mathalon, Ph.D., M.D.
Department of Psychiatry, Yale University School of Medicine

Inter-rater Reliability of Clinical Interview Based Measures
• Ratings of clinical severity for specific symptom domains (e.g., PANSS, BPRS, SAPS, SANS)
  • These are continuous scales.
  • Use intraclass correlations to assess their inter-rater reliability.
• Diagnostic assessment
  • This yields categorical (nominal scale) data.
  • How do we quantify reliability between diagnosticians? The candidates are percent agreement, chi-square, and kappa.
[Table: k × k cross-classification of cases, rows = Rater 1's category, columns = Rater 2's category]
• Two raters classify n cases into k mutually exclusive categories.
• $n_{ij}$ = number of cases falling into cell $(i, j)$, i.e., the frequency of the joint event $ij$; $n$ = total number of cases.
• $p_{ij} = n_{ij} / n$ = proportion of cases falling into a particular cell.
• Reliability by percentage agreement: $\sum_i p_{ii} = \frac{1}{n} \sum_i n_{ii}$, the proportion of cases on the diagonal of the table (sketched in code below).
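To make the diagonal-sum definition concrete, here is a minimal Python sketch; the percent_agreement helper and the 2 × 2 counts are hypothetical illustrations, not taken from the slides.

import numpy as np

def percent_agreement(table):
    """Observed proportion of agreement: the diagonal of a
    k x k rater-by-rater count table divided by the total n."""
    table = np.asarray(table, dtype=float)
    return float(np.trace(table) / table.sum())

# Hypothetical 2 x 2 table: rows = Rater 1, columns = Rater 2.
counts = [[40, 9],
          [6, 45]]
print(percent_agreement(counts))  # (40 + 45) / 100 = 0.85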

Percent Agreement Fails to Consider Agreement by Chance
• Assume two raters whose judgments are completely independent (i.e., not influenced by the true diagnostic status of the patient) each diagnose 90% of cases as schizophrenia and 10% of cases as not schizophrenia (i.e., Other).
• The agreement expected by chance in each category is obtained by multiplying the marginal probabilities together: $.90 \times .90 = .81$ for schizophrenia and $.10 \times .10 = .01$ for Other.
• The raters can therefore achieve a proportion agreement of $.81 + .01 = .82$ strictly by chance (checked numerically below).
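The chance calculation can be verified numerically. In this sketch, chance_agreement is a hypothetical helper, and the counts are one table (of many possible) whose marginals match the slide's 90%/10% example.

import numpy as np

def chance_agreement(table):
    """Agreement expected under rater independence: multiply the
    two raters' marginal proportions per category, then sum."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_rater1 = table.sum(axis=1) / n  # row marginals (Rater 1)
    p_rater2 = table.sum(axis=0) / n  # column marginals (Rater 2)
    return float(p_rater1 @ p_rater2)

# Both raters' marginals are .90 / .10, as in the slide's example.
counts = [[81, 9],
          [9, 1]]
print(round(chance_agreement(counts), 2))  # .9*.9 + .1*.1 = 0.82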

Chi-Square Test of Association as a Proposed Solution
• One can perform a chi-square test of association to test the null hypothesis that the two raters' judgments are independent. To reject independence, show that observed agreement departs from what would be expected by chance alone: $\chi^2 = \sum_{\text{cells}} (O - E)^2 / E$.
• Problem: a table can exhibit a perfect association between the raters with zero agreement, e.g., every case one rater calls schizophrenia, the other calls Other (demonstrated below).
• Chi-square is a test of association, not agreement: it is sensitive to any departure from chance, even when the dependency between the raters' judgments is perfect non-agreement. So the chi-square test cannot be used to assess agreement between raters.
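The association/agreement distinction is easy to demonstrate. The sketch below, assuming SciPy's chi2_contingency and an invented perfect-disagreement table, yields a highly significant chi-square even though the raters never agree.

import numpy as np
from scipy.stats import chi2_contingency

# Perfect association, zero agreement: Rater 1 says schizophrenia
# exactly when Rater 2 says Other, and vice versa.
disagree = np.array([[0, 50],
                     [50, 0]])

chi2, p, dof, expected = chi2_contingency(disagree)
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")  # large and significant
print(np.trace(disagree) / disagree.sum())      # agreement = 0.0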

Kappa Coefficient (Cohen, 1960)
• High reliability requires that the frequencies along the diagonal be greater than chance and that the off-diagonal frequencies be less than chance. The marginal frequencies/probabilities are used to estimate chance agreement.
• Proportion agreement observed: $p_o = \sum_i p_{ii} = \frac{1}{n} \sum_i n_{ii}$.
• Proportion agreement expected by chance: $p_c = \sum_i p_{i.} \times p_{.i}$.
• $\kappa = \dfrac{p_o - p_c}{1 - p_c}$.
• In the slide's three-category example, the marginal products $p_{i.} \times p_{.i}$ are .39, .075, and .01, so $p_c = .475$; the observed agreement is $p_o = .70$.

Interpretations of Kappa
• $\kappa$ = P(agreement | no agreement by chance): the probability that the judges agree given no agreement by chance.
• $1 - p_c = 1 - .475 = .525$ is the proportion of cases where agreement is not expected by chance.
• $p_o - p_c = .70 - .475 = .225$ is the proportion of cases that are non-chance agreements.
• Hence $\kappa = .225 / .525 \approx .43$ (computed below).
• One can test $H_0\!:\ \kappa = 0$; kappa is normally distributed in large samples, so significance can be tested with the normal distribution and confidence intervals can be constructed for kappa.
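Putting the pieces together, here is a minimal sketch of the kappa computation; cohens_kappa is a hypothetical helper, and the 3 × 3 counts were constructed to reproduce the slide's numbers ($p_o = .70$, $p_c = .475$), not taken from the original table.

import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a k x k count table:
    kappa = (p_o - p_c) / (1 - p_c)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n    # observed agreement
    p_1 = table.sum(axis=1) / n  # Rater 1 marginals
    p_2 = table.sum(axis=0) / n  # Rater 2 marginals
    p_c = float(p_1 @ p_2)       # chance-expected agreement
    return (p_o - p_c) / (1 - p_c)

# Marginals (.60, .30, .10) and (.65, .25, .10) give marginal
# products .39, .075, .01, so p_c = .475; the diagonal sums to .70.
counts = [[55, 5, 0],
          [5, 15, 10],
          [5, 5, 0]]
print(round(cohens_kappa(counts), 3))  # .225 / .525 ≈ 0.429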

Weighted Kappa Coefficient
• Weights $w_{ij}$ can be assigned to classification errors according to their seriousness, using ratio-scale weights.
• $\kappa_w = \dfrac{p_{o(w)} - p_{c(w)}}{1 - p_{c(w)}}$ (sketched below).

Kappa Rules of Thumb
• $\kappa \geq .75$ is considered excellent agreement.
• $\kappa \leq .46$ is considered poor agreement.

Weighted Kappa and the ICC
• Weighted kappa is an intraclass correlation coefficient (except for a factor of $1/n$) when the weights have the following property: $w_{ij} = 1 - \dfrac{(i - j)^2}{(k - 1)^2}$.
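A sketch of weighted kappa follows, assuming the quadratic weights $w_{ij} = 1 - (i-j)^2/(k-1)^2$ named above; the weighted_kappa helper and the reuse of the earlier hypothetical 3 × 3 table are illustrative assumptions, not the slides' own example.

import numpy as np

def weighted_kappa(table):
    """Weighted kappa with quadratic agreement weights
    w_ij = 1 - (i - j)^2 / (k - 1)^2 (the ICC-like case):
    kappa_w = (p_o(w) - p_c(w)) / (1 - p_c(w))."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    i, j = np.indices((k, k))
    w = 1.0 - (i - j) ** 2 / (k - 1) ** 2       # quadratic weights
    p = table / table.sum()                     # cell proportions
    p1, p2 = p.sum(axis=1), p.sum(axis=0)       # marginals
    p_ow = float((w * p).sum())                 # weighted observed agreement
    p_cw = float((w * np.outer(p1, p2)).sum())  # weighted chance agreement
    return (p_ow - p_cw) / (1 - p_cw)

# Same hypothetical 3 x 3 table as in the unweighted example.
counts = [[55, 5, 0],
          [5, 15, 10],
          [5, 5, 0]]
print(round(weighted_kappa(counts), 3))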

Problems with Kappa
• Kappa is affected by the base rates of the diagnoses, so it cannot easily be compared across studies that have different base rates, either in the population or in the reliability study.
• The estimate of chance agreement is itself a problem: when the null hypothesis of rater independence is not met (which is most of the time), the estimate of chance agreement is inaccurate and possibly inappropriate.

