Quantification of Agreement in Psychiatric Diagnosis Revisited

Patrick E. Shrout, PhD; Robert L. Spitzer, MD; Joseph L. Fleiss, PhD
Arch Gen Psychiatry. 1987;44(2):172-177. doi:10.1001/archpsyc.1987.01800140084013.

Eighteen years ago in this journal, Spitzer and colleagues¹ published "Quantification of Agreement in Psychiatric Diagnosis," in which they argued that a new measure, Cohen's κ statistic,² was the appropriate index of diagnostic agreement in psychiatry. They pointed out that other measures of diagnostic reliability then in use, such as total percent agreement and the contingency coefficient, were flawed as indexes of agreement because they either overestimated the discriminating power of the diagnosticians or were affected by associations among the diagnoses other than strict agreement. The new statistic seemed to overcome the weaknesses of the other measures: it took into account the fact that raters agree by chance alone some of the time, and it gave a perfect value only if there was total agreement among the raters. Furthermore, generalizations of the simple κ statistic were already available. This family of statistics could be used to assess
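
The preview above describes κ as raw agreement corrected for chance. As a minimal sketch (not part of the original article), the following Python function computes Cohen's κ for two raters from the standard formula κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the raters' marginal diagnosis frequencies; the raters, diagnosis labels, and data shown are hypothetical.

# Minimal sketch of Cohen's kappa for two raters (hypothetical data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: product of the raters' marginal proportions,
    # summed over all diagnostic categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two clinicians diagnosing ten patients.
a = ["dep", "dep", "anx", "dep", "scz", "anx", "dep", "anx", "scz", "dep"]
b = ["dep", "anx", "anx", "dep", "scz", "anx", "dep", "dep", "scz", "dep"]
print(round(cohens_kappa(a, b), 3))  # ~0.677

In this hypothetical case the raters agree on 80% of patients, yet κ is only about 0.68 once chance agreement is removed, which illustrates the correction the authors describe; κ reaches 1.0 only under total agreement.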
