Inter-rater reliability


Recommended Posts

Guest AGi

I am trying to measure inter-rater reliability between two different coders who have coded the same set of interviews separately, using their respective NVivo accounts (so essentially, there are two separate projects, one from Coder A and one from Coder B, both containing the same files and the same codes, but coded separately).

I tried going through the steps listed in this document to conduct a coding comparison query, but I am a little confused as to how this would be able to measure agreement between the two projects. When running the query as described towards the bottom of the document, all the Kappa coefficients come out as either 0 or 1, so I know something is wrong.

I have also tried merging the two projects into one new project and then running the coding comparison query again. This also resulted in coefficients of all 0s and 1s, which is not accurate. I noticed that there are some discrepancies in the way the files and nodes are organized between Coder A's project and Coder B's project (even though the codes are the same).

For a third attempt, I tried to consolidate the node and file classifications in both projects (A and B), merged them again, and then ran the coding comparison query. In doing this, I got Kappa coefficients of 1 across everything, which also cannot be correct. Is there something I am missing, or a guide/website/resource that would help walk me through the process of conducting inter-rater reliability between these two projects? Please help, and thank you in advance!

  • 3 weeks later...

Hi,

I would just like to share my experience in ensuring inter-rater reliability of my data. TBH, I didn't do it the way you did; instead, I randomly selected around 20% of the coded data I had (based on the literature, the exact percentage depends on the number of participants you have). I created a Google Form in which I put the data I had selected, with the codes as multiple-choice options, and let the other coders code it separately according to their own perspective. Then I could see how much we all agreed. I hope this helps.
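In case it is useful, once the form responses are exported, agreement collected this way can be summarised with a few lines of script. The sketch below is only a minimal, hypothetical example (the coder names and code labels such as "barrier" are made up); it computes pairwise percent agreement between each pair of coders and the average across pairs.

from itertools import combinations

# Hypothetical Google Form export: each coder's chosen code per excerpt.
responses = {
    "coder_a": ["barrier", "facilitator", "barrier", "neutral", "facilitator"],
    "coder_b": ["barrier", "facilitator", "neutral", "neutral", "facilitator"],
    "coder_c": ["barrier", "neutral", "barrier", "neutral", "facilitator"],
}

def percent_agreement(a, b):
    # Proportion of excerpts where the two coders chose the same code.
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Pairwise agreement for every pair of coders, plus the overall average.
scores = {
    (p, q): percent_agreement(responses[p], responses[q])
    for p, q in combinations(responses, 2)
}
for (p, q), s in scores.items():
    print(f"{p} vs {q}: {s:.2f}")
print("average:", round(sum(scores.values()) / len(scores), 2))

Percent agreement is easy to read but does not correct for chance; a chance-corrected statistic such as Cohen's kappa can be computed from the same exported responses if needed.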

  • 2 months later...
Guest Estelle P.

Hi everyone,

Did anyone have any luck with this? I am also trying to undertake inter-rater reliability comparing two projects from different users, but I have the same issues listed by AGi.

It looks like this comparison is usually carried out when two users code the same files (while each logging in with their own username). I might try to go down that road, but I would be happy to have insights from anyone for whom comparing two projects worked out!

Best,
  • 3 weeks later...

Dear All,

Estelle is correct. Two users have to code the same files (while each logging in with their own username). If they are working in separate projects, you will have to merge those projects. It is important that the hierarchical coding structure is identical in both projects before merging (which is where I think you have gone wrong, AGi). Try it first with a couple of documents and a couple of codes.
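To see why mismatched structures produce those extreme values, it may help to look at plain Cohen's kappa on unit-level coding decisions (a generic illustration with made-up data, not necessarily NVivo's exact calculation). If one coder's work ends up under a differently placed code, that code looks completely uncoded for them and kappa comes out as exactly 0; if the coding has been consolidated so both "coders" are identical, kappa comes out as exactly 1. A minimal sketch:

# Cohen's kappa on binary, unit-level coding decisions
# (1 = the coder applied the code to that text unit, 0 = they did not).

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1:        # both coders agree on every unit by construction
        return 1.0
    return (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]

# Genuine partial agreement gives a kappa strictly between 0 and 1:
coder_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(cohens_kappa(coder_a, coder_b))    # 0.6

# Coder B's work sits under a differently placed code, so this code
# looks uncoded for B, and kappa is exactly 0:
print(cohens_kappa(coder_a, [0] * 10))   # 0.0

# Coding consolidated under one user, so both "coders" are identical,
# and kappa is exactly 1:
print(cohens_kappa(coder_a, coder_a))    # 1.0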


Hello AGi and Estelle,

For detailed instructions about the Coding Comparison query, please refer to our official help documentation here. You will find explanations and how-to information for the unexpected results.
