Search the Community

Showing results for tags 'inter-rater'.
Found 2 results

  1. NVivo calculates inter-rater reliability based on how much agreement coders have on a per-character basis, that is, how much of the same text within a sentence they each highlight. This is NOT the unit of analysis that I need. I need to conduct a coding comparison based on the codes used per QUESTION or per block/paragraph of text. Has anyone found a workaround that allows the unit of analysis to be the question or block of text INSTEAD of the character?
  2. Hi, I have a dataset with one column and many rows. Each field (or cell) in the dataset contains text that is being coded by two people. I have read the manual and see how to run the coding comparison query. This apparently compares the nodes based on the number of characters that are selected and added to each node. Instead, I just want to know whether a cell contains any text that has been coded to a node by coder 1, and then compare that to coder 2. In other words, I don't care about the exact text highlighted and added to a node; I just want to know whether the field contains any text coded to the node or not, and then compare those. This is hard to explain in writing, but hopefully it makes some sense. Thanks for the help! ___________ Example: Each row of the single column of the dataset represents a survey response. Two coders are coding whether the survey response is generally positive or generally negative. I want to see the agreement between their coding for positive and negative at the level of the response, not by the exact text selected and coded into the node.
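Neither post's unit of analysis (question, cell, or paragraph) matches the per-character comparison that the coding comparison query reports, so one possible workaround is to compute the agreement outside NVivo. Below is a minimal sketch in Python, assuming each coder's coding can be reduced to a 0/1 "cell coded to node" flag per row and exported to CSV (for example by tallying from a coding matrix); the file names coder1_positive.csv and coder2_positive.csv, and the Positive column, are hypothetical placeholders, not NVivo output names.

```python
import csv

def load_flags(path, node_column):
    """Read a 0/1 column ("was this cell coded to the node?") as a list of ints."""
    with open(path, newline="") as f:
        return [int(row[node_column]) for row in csv.DictReader(f)]

def cohens_kappa(a, b):
    """Cohen's kappa for two coders' binary ratings of the same items."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement from each coder's marginal proportions of 1s and 0s.
    pa1, pb1 = sum(a) / n, sum(b) / n
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    if expected == 1:  # both coders constant and identical: kappa is undefined
        return 1.0 if observed == 1 else 0.0
    return (observed - expected) / (1 - expected)

# Hypothetical exports: one row per dataset cell, one 0/1 column per node.
coder1 = load_flags("coder1_positive.csv", "Positive")
coder2 = load_flags("coder2_positive.csv", "Positive")

matches = sum(1 for x, y in zip(coder1, coder2) if x == y)
print(f"Percent agreement (per cell): {matches / len(coder1):.2%}")
print(f"Cohen's kappa (per cell):     {cohens_kappa(coder1, coder2):.3f}")
```

Computed this way, every cell (question, survey response, or paragraph) counts as exactly one item, so the agreement figures reflect the unit of analysis both posts ask for rather than the number of characters highlighted.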