Guest A_G Posted January 12

I am trying to measure inter-rater reliability between two coders who have coded the same set of interviews separately in their own NVivo accounts. Essentially, there are two separate projects, one from Coder A and one from Coder B, both containing the same files and the same codes, but coded independently.

I tried going through the steps listed in this document to run a coding comparison query, but I am a little confused about how this can measure agreement between the two projects. As noted towards the bottom of the document, when I run the query all of the Kappa coefficients come out as either 0 or 1, so I know something is wrong.

I also tried merging the two projects into one new project and then running the coding comparison query again. This again produced coefficients of all 0s and 1s, which is not accurate. I noticed there are some discrepancies in the way the files and nodes are organized between Coder A's project and Coder B's project, even though the codes themselves are the same.

For a third attempt, I consolidated the node and file classifications in both projects (A and B), merged them again, and reran the coding comparison query. This time I got a Kappa coefficient of 1 across everything, which also cannot be correct.

Is there something I am missing, or a guide/website/resource that would walk me through conducting inter-rater reliability between these two projects? Please help, and thank you in advance!
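For context, my rough understanding of the Kappa calculation is sketched below in Python (this is only an illustration, not NVivo's actual implementation; I am assuming each text unit simply gets a coded/uncoded flag per coder). It at least shows why I am suspicious of the results: if the merge collapsed both coders' work into a single coding layer, every comparison would be unit-for-unit identical and Kappa would come out as 1 everywhere.

    def cohens_kappa(coder_a, coder_b):
        # coder_a, coder_b: equal-length lists of 0/1 flags, one per text unit,
        # marking whether that unit is coded at the node being compared
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        p_yes_a = sum(coder_a) / n
        p_yes_b = sum(coder_b) / n
        # chance agreement estimated from each coder's marginal coding rates
        expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
        if expected == 1:
            # degenerate case: both coders coded everything (or nothing),
            # i.e. complete agreement, conventionally reported as 1
            return 1.0
        return (observed - expected) / (1 - expected)

    # Identical coding always gives kappa = 1; partial overlap gives 0 < kappa < 1
    print(cohens_kappa([1, 1, 0, 0, 1], [1, 1, 0, 0, 1]))  # 1.0
    print(cohens_kappa([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # ~0.17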