
Strength Of Agreement Kappa

Cohen's kappa (or simply kappa) is meant to measure agreement between two raters, comparing the agreement actually observed with the agreement expected by chance alone. To better understand the conditional interpretation of Cohen's kappa coefficients, I followed the kappa calculation method proposed by Bakeman et al. (1997). Those calculations make the simplifying assumptions that the two observers were equally accurate and unbiased, that the codes were equally detectable, that disagreements were equally likely, and that, where prevalence varied, it did so in equal steps (Bakeman & Quera, 2011). If you have only two categories, Scott's pi (with confidence intervals computed by the Donner and Eliasziw (1992) method) is a more reliable measure of inter-rater agreement than kappa (Zwick, 1988).

Kappa reaches its theoretical maximum of 1 only if the two observers distribute codes in the same way, that is, if the corresponding marginal totals are equal; anything else yields less than perfect agreement. Still, the maximum value kappa could achieve given unequal distributions helps in interpreting the value of kappa actually obtained. The equation for the maximum is kappa_max = (P_max − P_e) / (1 − P_e), where P_max is the sum, over all codes, of the smaller of the two corresponding marginal proportions.[16] If the observed agreement is due only to chance, i.e. if the ratings are completely independent, then each diagonal element of the expected table is the product of the two corresponding marginals. Cohen's kappa, symbolized by the lowercase Greek letter κ (7), is a robust statistic that is useful for either inter-rater or intra-rater reliability testing.
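To make these quantities concrete, here is a minimal Python sketch, assuming a square confusion matrix of code assignments; the function name and the example table are illustrative, not taken from the original. It computes observed agreement from the diagonal, chance agreement from the products of the marginals, kappa, and the maximum kappa attainable given the observed marginal distributions.

    import numpy as np

    def cohen_kappa(confusion):
        # Rows: observer A's codes, columns: observer B's codes (counts).
        m = np.asarray(confusion, dtype=float)
        m = m / m.sum()                            # convert counts to proportions
        row, col = m.sum(axis=1), m.sum(axis=0)    # marginal proportions

        p_o = np.trace(m)                          # observed agreement (diagonal)
        p_e = np.dot(row, col)                     # chance agreement: products of marginals
        kappa = (p_o - p_e) / (1 - p_e)

        # Maximum kappa attainable given these (possibly unequal) marginals
        p_max = np.minimum(row, col).sum()
        kappa_max = (p_max - p_e) / (1 - p_e)
        return kappa, kappa_max

    # Example: a 3 x 3 table of code assignments by two observers
    table = [[20,  5,  0],
             [ 3, 15,  2],
             [ 1,  4, 10]]
    k, k_max = cohen_kappa(table)
    print(f"kappa = {k:.3f}, maximum attainable kappa = {k_max:.3f}")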

As with correlation coefficients, kappa can range from −1 to +1, where 0 represents the amount of agreement that can be expected from chance alone and 1 represents perfect agreement between the raters. While kappa values below 0 are possible, Cohen notes that they are unlikely in practice (8). As with all correlation statistics, kappa is a standardized value and is therefore interpreted in much the same way across studies.

In the 3 × 3 table, two of the cells would reflect no agreement at all (indicated by a number in the table). For a yes/no coding, the overall probability of chance agreement is the probability that the raters agreed on either a yes or a no. Increasing the number of codes leads to progressively smaller increases in kappa. If the number of codes is fewer than five, and especially if k = 2, lower kappa values may be acceptable, but variability in prevalence must also be taken into account. With only two codes, the highest kappa value was .80 when observers were 95% accurate, and the lowest was .02 when observers were 80% accurate. Computing kappa is a simple procedure when the values are limited to zero and one and the number of data collectors is two.
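For that two-collector, zero/one case, the procedure might be sketched in Python as follows; the function name and the example ratings are illustrative, not taken from the original. Chance agreement is the probability that both raters say yes plus the probability that both say no, built from each rater's own marginal rate of "yes".

    def binary_kappa(rater_a, rater_b):
        # Cohen's kappa for two data collectors assigning 0 (no) or 1 (yes).
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

        p_yes_a = sum(rater_a) / n            # rater A's rate of "yes"
        p_yes_b = sum(rater_b) / n            # rater B's rate of "yes"
        p_yes = p_yes_a * p_yes_b             # both say yes by chance
        p_no = (1 - p_yes_a) * (1 - p_yes_b)  # both say no by chance
        p_e = p_yes + p_no                    # overall chance agreement

        return (p_o - p_e) / (1 - p_e)

    # Two data collectors coding ten items as 0 (no) or 1 (yes)
    a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    print(round(binary_kappa(a, b), 3))  # about 0.583 for this toy data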