When to Use Scott’s π or Krippendorff's α, If Ever?

Author: Xinshu Zhao
 
Inter-coder or test-retest reliability is widely used to assess measurement quality in such disciplines as communication, medical studies, and marketing research. Various methods have been proposed to calculate reliability, including Yule’s Y (1912), Bennett et al.’s S (1954), Scott’s π (1955), Osgood’s index (1959), Cohen’s κ (1960), Holsti’s coefficient (1969), Maxwell’s RE (1977), Krippendorff’s α (1980), Perreault & Leigh’s Iᵣ (1989), Feinstein & Cicchetti’s ψ (1990b), Potter & Levine-Donnerstein’s redefined π (1999), and Gwet’s γ (2006). Each has been recommended and used as a general indicator of the same concept, reliability. Yet they offer different formulas and produce different, sometimes drastically different, results. This paper focuses on two of them: Scott’s π and Krippendorff’s α.
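
To make the difference concrete, below is a minimal sketch, assuming the simplest setting the two coefficients share: two coders, nominal categories, no missing data. The function names and toy ratings are mine, not from the paper. Both coefficients correct observed agreement for chance; they differ in how chance is estimated. Scott’s π pools the two coders’ N = 2n judgments and squares the marginal proportions (drawing with replacement), while Krippendorff’s α draws the hypothetical pair of values without replacement, which in this case works out to α = π + (1 − π)/(2n).

    from collections import Counter

    def scott_pi(c1, c2):
        """Scott's pi: two coders, nominal categories, no missing data."""
        n = len(c1)
        p_o = sum(a == b for a, b in zip(c1, c2)) / n        # observed agreement
        pooled, N = Counter(c1) + Counter(c2), 2 * n         # pooled category counts
        p_e = sum((k / N) ** 2 for k in pooled.values())     # chance agreement, with replacement
        return (p_o - p_e) / (1 - p_e)

    def kripp_alpha(c1, c2):
        """Krippendorff's alpha, nominal metric: two coders, no missing data."""
        n = len(c1)
        pooled, N = Counter(c1) + Counter(c2), 2 * n
        d_o = sum(a != b for a, b in zip(c1, c2)) / n        # observed disagreement
        # expected disagreement: a pair drawn without replacement from the 2n pooled values
        d_e = 1 - sum(k * (k - 1) for k in pooled.values()) / (N * (N - 1))
        return 1 - d_o / d_e

    # Hypothetical ratings: 10 units, 8 agreements, pooled margins 50/50.
    c1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "yes"]
    c2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
    print(scott_pi(c1, c2))     # 0.60
    print(kripp_alpha(c1, c2))  # 0.62 = 0.60 + (1 - 0.60)/20

On these toy data π = .60 and α = .62. The two converge as the number of units grows, but on small samples the choice of coefficient already changes the number reported.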
 
I. Reliability vs. Reliabilities
 
Across various disciplines of the social sciences and medical studies, Cohen’s κ (1960), which I have discussed in a separate paper, is the most often used indicator of reliability. Scott’s π (1955) is the second most popular, especially among communication researchers.

In the Social Sciences Citation Index (SSCI), between 1994 and 2010, Scott (1955) was cited 261 times and Krippendorff (1980) 243 times. During the same period, in Communication and Mass Media Complete (CMMC), “Scott’s Pi” had 597 citations, rising from 11 in 1994 to 61 in 2009; “Krippendorff’s Alpha” had 145 citations, rising from 2 in 1994 to 26 in 2009. At the time of our search, full-year statistics for 2010 were not yet available.

If Krippendorff’s α has not been the most frequently used, it may be among the most strongly recommended. Respected communication methodologists such as Hayes & Krippendorff (2007) and Krippendorff (2004b) have argued that α should be the only general indicator to use.

