I have some data describing residential units for people with learning disabilities, with variables like how nice the furnishings are, the level of psychiatric symptomatology on the unit, how happy the staff are, that sort of thing.

I want to check whether we are measuring the right things, e.g. do units with happier staff have nicer atmospheres, do units with nicer furnishings have happier staff, that kind of thing.

The problem is that I only have data for 8 units (each value is an average within a unit, e.g. 10 staff report how happy they are and this is averaged for the unit), so I can't really use linear regression to see whether the things we have measured affect each other. I've drawn scatterplots for all of the data and, on the whole, I would say there does appear to be a linear relationship in the direction I would expect. But, as I say, with 8 units it's not really going to generate any statistics.

I had the bright idea of ranking the units on each variable and then comparing the rank orders somehow. If we are measuring the right things, then the ranks should be similar, like this:

```
Unit 1 (ranks across all variables): 1,1,1,1,1,1,1,1,1
Unit 2:                              2,2,2,2,2,2,2,2,2
```

etc.

whereas if I'm wrong, and the variables aren't important to each other, I will get this:

```
Unit 1: 1,2,3,4,5,6,7,8
Unit 2: 8,7,6,5,4,3,2,1
Unit 3: 4,5,6,7,8,1,2,3
```

etc.

Here's what I get:

```
Unit1  7  5  5.0  5  3  4  5  3
Unit2  6  2  4.0  6  5  3  2  5
Unit3  3  7  7.5  1  4  1  1  1
Unit4  4  4  3.0  7  6  7  7  8
Unit5  5  3  1.0  4  2  5  6  7
Unit6  2  6  6.0  8  8  8  8  6
Unit7  1  8  7.5  3  7  6  4  4
Unit8  8  1  2.0  2  1  2  3  2
```

It looks pretty good to me, except for the first column.

Any thoughts on this? Is this a proper statistic that I haven't heard of? Or is there something reasonably robust I can do with these results?

Sorry for the lengthy question, many thanks in advance!


#### Best Answer

I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to "worst". You would then expect the rank orders to be similar among "raters". This looks like an application for Kendall's coefficient of concordance $W$, a measure of inter-rater agreement. In R:

```
> rtr1 <- c(1, 6, 3, 2, 5, 4)   # rank order from "rater" 1
> rtr2 <- c(1, 5, 6, 2, 4, 3)   # "rater" 2
> rtr3 <- c(2, 3, 6, 5, 4, 1)   # "rater" 3
> ratings <- cbind(rtr1, rtr2, rtr3)
> library(irr)                  # for kendall()
> kendall(ratings)
Kendall's coefficient of concordance W

 Subjects = 6
   Raters = 3
        W = 0.568

 Chisq(5) = 8.52
  p-value = 0.130
```
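If you don't have the `irr` package to hand, $W$ is easy to compute directly from its textbook definition, $W = 12S / (m^2(n^3 - n))$, where $S$ is the sum of squared deviations of the units' rank sums from their mean, $m$ the number of raters, and $n$ the number of units. A minimal sketch in Python (just the uncorrected formula, i.e. no adjustment for ties; this is my illustration, not part of `irr`):

```python
import numpy as np

# Ranks from the three "raters" above: rows = raters, columns = units
ratings = np.array([
    [1, 6, 3, 2, 5, 4],   # rater 1
    [1, 5, 6, 2, 4, 3],   # rater 2
    [2, 3, 6, 5, 4, 1],   # rater 3
])
m, n = ratings.shape                 # m raters, n units

R = ratings.sum(axis=0)              # rank sum per unit
S = ((R - R.mean()) ** 2).sum()      # squared deviations from the mean rank sum
W = 12 * S / (m**2 * (n**3 - n))     # Kendall's W (no tie correction)

chisq = m * (n - 1) * W              # chi-square approximation, df = n - 1
print(round(W, 3), round(chisq, 2))  # 0.568 8.52
```

This reproduces the `W = 0.568` and `Chisq(5) = 8.52` from the `kendall()` output above. Note that your third column contains tied ranks (7.5, 7.5), so for your own data a tie-corrected version (as `kendall()` applies) is preferable.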

Edit: this is equivalent to the Friedman test for dependent samples:

```
> rtrAll <- c(rtr1, rtr2, rtr3)
> nBl    <- 3                           # number of blocks / raters
> P      <- 6                           # number of dependent samples / units
> IV     <- factor(rep(1:P, nBl))       # factor sample / unit
> blocks <- factor(rep(1:nBl, each=P))  # factor blocks / raters
> friedman.test(rtrAll, IV, blocks)

        Friedman rank sum test

data:  rtrAll, IV and blocks
Friedman chi-squared = 8.5238, df = 5, p-value = 0.1296
```
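For completeness, the same Friedman test is available in Python as `scipy.stats.friedmanchisquare`, which takes one sequence per unit (treatment), each holding that unit's ranks from the raters (blocks). A sketch on the same toy data, assuming SciPy is installed:

```python
from scipy.stats import friedmanchisquare

# Columns of the ratings matrix above: each unit's ranks from the 3 "raters"
units = [
    (1, 1, 2),  # unit 1
    (6, 5, 3),  # unit 2
    (3, 6, 6),  # unit 3
    (2, 2, 5),  # unit 4
    (5, 4, 4),  # unit 5
    (4, 3, 1),  # unit 6
]
stat, p = friedmanchisquare(*units)
print(round(stat, 4), round(p, 4))   # 8.5238 0.1296
```

This matches the `friedman.test` output above (chi-squared = 8.5238, df = 5, p = 0.1296).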
