Showing results for tags 'kappa'.

Found 4 results

  1. Hi, I was wondering if the community can help me understand something. I am trying to finish my thesis, which involves a content analysis of a subsample of roughly 104 public comments. I developed a codebook with which a second coder and I can code these comments (the codes are child nodes of the parent node that identifies the variable I am measuring). I want to use Cohen's kappa as a measure of inter-coder reliability, and from what I understand, to get a coefficient for one node across all sources I do something similar to the Excel sheet found at the bottom of NVivo's FAQ here (also attached). I understand how Cohen's kappa works mathematically for an example with two coders and two codes. However, I am having trouble understanding what is going on in the spreadsheet, and I don't want to copy the formulae and their placement without understanding what is being done to get an average kappa coefficient. So my question is this: can someone break down the spreadsheet for me in descriptive terms? Even understanding what Sum EF, TA and TU represent would be immensely helpful (see the first sketch after this list for one possible reading). Pointers to other resources that would lead me in the right direction would be great too; unfortunately, I don't have any contacts at my university who could help (and it's summer, so feedback is nil anyway). Thanks in advance!
  2. Hello, I'm stuck (yet again) on using an NVivo function. I set up an inter-coder test and had two coders code 48 images (sources) against 27 codes (nodes), and I'm trying to deal with the results of the coding comparison, which NVivo only calculates for every source against every node, resulting in a spreadsheet with 1,296 lines of comparison. Exporting the results into Excel to try to calculate averages also yields nothing, as it keeps returning: "Error: Evaluation of function AVERAGE caused a divide by zero error." I can see formulas such as [=IF($Q30-$O30=0,1,($P30-$O30)/($Q30-O30))] in the NVivo example: http://redirect.qsrinternational.com/examples-coding-comparison-nv10-en.htm. However, not using Excel very often or being terribly proficient with formulas, I'm not sure how to set up and apply a formula to my spreadsheet columns to get meaningful overall kappa and % agreement figures (see the second sketch after this list for one approach); all sources are to be weighted equally. Another issue: when looking at the node reference counts I also can't see which coder applied which codes to which images. In node view it only shows who coded (modified) what last, and the colours I assigned to each coder do not show up either. Any help would really be appreciated; I'm terribly behind in my work. Thanks!
  3. Hello, I'm trying to figure out the best way to create two inter-coder user profiles for a coding comparison, in order to get the kappa for my coding structure. I thought I had figured it out: I checked "prompt for user on launch", entered a new user, applied codes to a single node and compared them against my earlier coding (under my regular profile), which seemed to work. However, upon closing and reopening NVivo I see that the coder profile I created is gone; it cannot be found under Project Info > Users, which only shows my original profile, and the same under Options > General. Is this because I didn't save the test coding? Does NVivo only save a profile if that profile has modified work on the project? I just want to be absolutely clear on this before proceeding further. So to reiterate: how do I create two additional user profiles that will be saved, so I can calculate agreement % and kappa and make sure no data is lost? Thanks.
  4. Hi, I have a dataset with one column and many rows. Each field (or cell) in the dataset contains text that is being coded by two people. I have read the manual and see how to run the comparison query. This apparently compares the nodes based on the number of characters that are selected and added to each node. Instead, I just want to know whether a cell contains any text that has been coded to a node by coder 1, and then compare that to coder 2. In other words, I don't care about the exact text highlighted and added to a node; I just want to know whether the field contains any text coded to the node or not, and then compare those (see the third sketch after this list). This is hard to explain in writing, but hopefully it makes some sense. Thanks for the help! Example: each row of the single column of the dataset represents a survey response. Two coders are coding whether the survey response is generally positive or generally negative. I want to see the agreement between the coders at the level of the response, not by the exact text selected and coded into the node.
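
For question 1, here is a minimal sketch (not QSR's official code) of how the quantities in the example spreadsheet appear to fit together. It assumes the usual reading of the workbook: TU is the total number of text units (characters) in a source, TA is the number of units the two coders agree on (coded by both plus coded by neither), and Sum EF is the chance-expected agreement, so that kappa = (TA - Sum EF) / (TU - Sum EF) for each source/node pair, with the denominator case handled as in the IF formula quoted in question 2. The per-source averaging shown here weights each source by its size, which is my reading of the example workbook; the function and variable names are mine, not NVivo's.

```python
# A sketch of the kappa arithmetic behind the NVivo coding-comparison
# spreadsheet, under the assumptions described above.

def kappa_for_source(units_a, units_b, units_both, total_units):
    """Character-level kappa for a single source/node pair.

    units_a     -- characters coded to the node by coder A (including overlap)
    units_b     -- characters coded to the node by coder B (including overlap)
    units_both  -- characters coded by both coders
    total_units -- total characters in the source (TU)
    """
    # Agreement units (TA): coded by both, plus coded by neither.
    neither = total_units - units_a - units_b + units_both
    ta = units_both + neither

    # Expected-frequency units (Sum EF): chance agreement on 'coded'
    # plus chance agreement on 'not coded'.
    ef_coded = units_a * units_b / total_units
    ef_uncoded = (total_units - units_a) * (total_units - units_b) / total_units
    sum_ef = ef_coded + ef_uncoded

    # Kappa = (TA - Sum EF) / (TU - Sum EF); treated as 1 when the
    # denominator is 0 (e.g. both coders coded everything or nothing).
    if total_units - sum_ef == 0:
        return 1.0
    return (ta - sum_ef) / (total_units - sum_ef)


def average_kappa(rows):
    """Average kappa across sources, weighting each source by its size (TU).

    rows -- iterable of (units_a, units_b, units_both, total_units) tuples,
            one per source, for the node of interest.
    """
    weighted = sum(kappa_for_source(*r) * r[3] for r in rows)
    total = sum(r[3] for r in rows)
    return weighted / total
```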
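For question 2, the divide-by-zero error from AVERAGE usually just means the range being averaged contains no numeric cells (for example, the formula is pointing at header or text cells). If the exported results can be read outside Excel, a short pandas sketch like the one below can produce per-node and overall averages with every source weighted equally. The file name and the column headers ("Node", "Kappa", "Agreement (%)") are assumptions and should be adjusted to match the actual export.

```python
# A minimal sketch for averaging an exported coding-comparison results
# sheet; column and file names are hypothetical.
import pandas as pd

df = pd.read_excel("coding_comparison_export.xlsx")  # hypothetical file name

# Unweighted averages per node across the 48 sources, so each source
# counts equally, as requested.
per_node = df.groupby("Node")[["Kappa", "Agreement (%)"]].mean()

# Overall averages across all 1,296 source/node rows.
overall = df[["Kappa", "Agreement (%)"]].mean()

print(per_node)
print(overall)
```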
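For question 4, since the built-in comparison query works on the characters selected, getting response-level agreement generally means first reducing each coder's work to a per-cell yes/no (for example, from an exported matrix of cases against each coder's coding) and then computing kappa on those indicators. The sketch below assumes a hypothetical export with one row per survey response and a 0/1 column per coder; the file and column names are illustrative only.

```python
# A minimal sketch of cell-level (response-level) agreement between two
# coders, assuming a hypothetical per-response export as described above.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_excel("cell_level_coding.xlsx")  # hypothetical export

coder1 = df["Positive_coder1"].astype(int)  # 1 if coder 1 coded any text in the cell
coder2 = df["Positive_coder2"].astype(int)  # 1 if coder 2 coded any text in the cell

kappa = cohen_kappa_score(coder1, coder2)
agreement = (coder1 == coder2).mean()
print(f"Cell-level kappa: {kappa:.3f}, percent agreement: {agreement:.1%}")
```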