pingouin.intraclass_corr#

pingouin.intraclass_corr(data=None, targets=None, raters=None, ratings=None, nan_policy='raise')[source]#

Intraclass correlation.

Parameters:
data : pandas.DataFrame

Long-format dataframe. Data must be fully balanced.

targets : string

Name of column in data containing the targets.

raters : string

Name of column in data containing the raters.

ratings : string

Name of column in data containing the ratings.

nan_policy : str

Defines how to handle missing values (NaN). 'raise' (default) throws an error; 'omit' performs the calculations after deleting any target with one or more missing values (= listwise deletion; see the final example below).

Added in version 0.3.0.

Returns:
stats : pandas.DataFrame

Output dataframe:

  • 'Type': ICC type

  • 'Description': description of the ICC

  • 'ICC': intraclass correlation

  • 'F': F statistic

  • 'df1': numerator degrees of freedom

  • 'df2': denominator degrees of freedom

  • 'pval': p-value

  • 'CI95%': 95% confidence intervals around the ICC

Notes

The intraclass correlation (ICC, [1]) assesses the reliability of ratings by comparing the variability of different ratings of the same subject to the total variation across all ratings and all subjects.

Shrout and Fleiss (1979) [2] describe six cases of reliability of ratings done by \(k\) raters on \(n\) targets. Pingouin returns all six cases with corresponding F and p-values, as well as 95% confidence intervals.

From the documentation of the ICC function in the psych R package:

  • ICC1: Each target is rated by a different rater and the raters are selected at random. This is a one-way ANOVA fixed effects model.

  • ICC2: A random sample of \(k\) raters rate each target. The measure is one of absolute agreement in the ratings. ICC1 is sensitive to differences in means between raters and is a measure of absolute agreement.

  • ICC3: A fixed set of \(k\) raters rate each target. There is no generalization to a larger population of raters. ICC2 and ICC3 remove mean differences between raters, but are sensitive to interactions. The difference between ICC2 and ICC3 is whether raters are seen as fixed or random effects.
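For concreteness, the single-rater estimate of the one-way case (ICC1) can be written in terms of ANOVA mean squares. This restates the standard Shrout and Fleiss (1979) formula rather than quoting the psych documentation:

\[
\text{ICC1} = \frac{MS_B - MS_W}{MS_B + (k - 1)\,MS_W}
\]

where \(MS_B\) and \(MS_W\) are the between-target and within-target mean squares from a one-way ANOVA on the \(n\) targets, and \(k\) is the number of raters.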

Then, for each of these cases, the reliability can be estimated either for a single rating or for the average of \(k\) ratings. The single-rating case is equivalent to the average intercorrelation, while the \(k\)-rating case is equivalent to the Spearman-Brown adjusted reliability. ICC1k, ICC2k and ICC3k reflect the reliability of the means of \(k\) raters.
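The link between the single-rating and average-rating reliabilities is the Spearman-Brown formula; for the one-way case, for example (standard formula, restated here for illustration):

\[
\text{ICC1k} = \frac{k \times \text{ICC1}}{1 + (k - 1) \times \text{ICC1}}
\]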

This function has been tested against the ICC function of the R psych package. Note, however, that contrary to the R implementation, the current implementation does not use a linear mixed-effects model but a regular ANOVA, which means that it only works with complete-case data (no missing values).
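To make the ANOVA-based computation concrete, below is a minimal sketch of the one-way case (ICC1) using only numpy and pandas. The helper name icc1_anova is hypothetical; this is an illustration, not pingouin's actual implementation:

import numpy as np
import pandas as pd

def icc1_anova(data, targets, raters, ratings):
    # Illustrative sketch (hypothetical helper, not pingouin's code).
    # Pivot long-format data into a balanced targets x raters table.
    table = data.pivot_table(index=targets, columns=raters, values=ratings)
    n, k = table.shape  # n targets, k raters
    grand_mean = table.values.mean()
    row_means = table.mean(axis=1).values
    # One-way ANOVA mean squares: between targets and within targets.
    msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((table.values - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    # Shrout & Fleiss (1979) single-rater, one-way model estimate.
    return (msb - msw) / (msb + (k - 1) * msw)

On complete, fully balanced data such as the wine dataset in the Examples below, this sketch should agree with the ICC1 row returned by pingouin.intraclass_corr.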

References

[1]

http://en.wikipedia.org/wiki/Intraclass_correlation

[2]

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: uses in assessing rater reliability. Psychological bulletin, 86(2), 420.

Examples

ICCs of wine quality assessed by 4 judges.

>>> import pingouin as pg
>>> data = pg.read_dataset('icc')
>>> icc = pg.intraclass_corr(data=data, targets='Wine', raters='Judge',
...                          ratings='Scores').round(3)
>>> icc.set_index("Type")
                   Description    ICC       F  df1  df2  pval         CI95%
Type
ICC1    Single raters absolute  0.728  11.680    7   24   0.0  [0.43, 0.93]
ICC2      Single random raters  0.728  11.787    7   21   0.0  [0.43, 0.93]
ICC3       Single fixed raters  0.729  11.787    7   21   0.0  [0.43, 0.93]
ICC1k  Average raters absolute  0.914  11.680    7   24   0.0  [0.75, 0.98]
ICC2k    Average random raters  0.914  11.787    7   21   0.0  [0.75, 0.98]
ICC3k     Average fixed raters  0.915  11.787    7   21   0.0  [0.75, 0.98]
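By default, missing values raise an error. Passing nan_policy='omit' instead drops any target with one or more missing ratings before computing the ICCs. A minimal illustration on a copy of the same dataset (assuming a default integer index; output omitted):

>>> import numpy as np
>>> data_nan = data.copy()
>>> data_nan.loc[0, 'Scores'] = np.nan
>>> icc_omit = pg.intraclass_corr(data=data_nan, targets='Wine', raters='Judge',
...                               ratings='Scores', nan_policy='omit')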