Showing results for tags 'kappa'.

Found 5 results

  1. I'm doing a qualitative analysis of survey data; my source file is in Excel format. I have coded the whole dataset, and a second coder will code 25% of it so that we have an intercoder reliability measure (Cohen's Kappa). We ran a trial in which the other coder coded a few answers, and I ran a coding comparison query to see how it went. Our percentage agreement was very high (99%), but our Kappa value was quite low (0.13), indicating poor agreement. I understand this is "Because the Kappa coefficient calculation takes into account the likelihood of the agreement between users occurring by chance, the value of Kappa can be low even though the percentage agreement is high." (source: https://help-nv11mac.qsrinternational.com/desktop/procedures/run_a_coding_comparison_query.htm#MiniTOCBookMark6; a numeric sketch after this list shows how this can happen). However, I worry this will still be a problem given that I coded the whole dataset and the second coder only 25% of it. Is there a way to tell NVivo to use only the 25% of the dataset that was coded by both users to calculate the Kappa value? Or is there a way to "delete" 75% of the data (which was imported into NVivo as a spreadsheet) so that the 25% we want to look at is treated as 100% by the coding comparison query? Also, how can the other coder code the data without seeing my coding? I understand they can hide the coding stripes, but then they also can't see what they themselves have coded!
  2. Hi, I was wondering if the community could help me understand something. I am trying to finish my thesis, which involves a content analysis of a subsample of roughly 104 public comments. I developed a codebook with which I and a second person can code these comments (the codes are child nodes of the parent node that identifies the variable I am measuring). I want to use Cohen's Kappa as a measure of intercoder reliability, and from what I understand, in order to get a coefficient for one node across all sources, I do something similar to the Excel sheet found at the bottom of NVivo's FAQ here (also attached). I understand how Cohen's Kappa works mathematically for an example with two coders and two codes. However, I am having trouble understanding what is going on in the spreadsheet, and I don't want to copy the formulae and their placement without understanding what is being done to get an average Kappa coefficient. So my question is this: can someone break down the spreadsheet for me in descriptive terms? Even understanding what Sum EF, TA and TU are would be immensely helpful (see the sketch after this list). Pointers to other resources that could lead me in the right direction would be great too; unfortunately, I don't have any contacts at my university who could help (and it's summer, so feedback is nil anyway). Thanks in advance!
  3. Hello, I'm stuck (yet again) on using an NVivo function. I set up an intercoder test and had 2 coders code 48 images (sources) against 27 codes (nodes), and now I'm trying to deal with the results of the coding comparison, which NVivo only calculates for every source against every node, resulting in a spreadsheet with 1,296 rows of comparisons. Exporting the results into Excel to try to calculate averages also yields nothing, as it keeps returning: "Error: Evaluation of function AVERAGE caused a divide by zero error." I can see formulas [=IF($Q30-$O30=0,1,($P30-$O30)/($Q30-O30))] in the NVivo example: http://redirect.qsrinternational.com/examples-coding-comparison-nv10-en.htm. However, since I don't use Excel very often and am not terribly proficient with formulas, I'm not sure how to set up and apply a formula to my spreadsheet columns to get meaningful overall Kappa and % agreement results (one way to average the export is sketched after this list). All sources are to be weighted equally. Another issue: when looking at the node reference counts, I can't see which coder applied which codes to which images. In node view it only shows who coded (modified) what last, and the colours I assigned to each coder also do not show up. Any help would really be appreciated; I'm terribly behind in my work... Thanks!
  4. Hello, I'm trying to figure out the best way to create 2 intercoder user profiles for a coding comparison, in order to get the Kappa for my coding structure. I thought I had figured it out: I checked "prompt for user on launch", entered a new user, applied codes to a single node, and compared that against my earlier coding (under my regular profile), which seemed to work. However, upon closing and reopening NVivo, I see that the coder profile I created is gone; it cannot be found under Project Info > Users, which shows only my original profile, and likewise under Options > General. Is this because I didn't save the test coding? Does NVivo only save a profile if that profile has modified work on the project? I just want to be absolutely clear on this before proceeding further. So, to reiterate: how do I create 2 additional user profiles that will be saved, so I can calculate % agreement and Kappa without losing any data? Thanks.
  5. Hi, I have a dataset with one column and many rows. Each field (or cell) in the dataset contains text that is being coded by two people. I have read the manual and see how to run the comparison query. This apparently compares the nodes based on the number of characters that are selected and added to each node. Instead, I just want to know whether a cell contains any text that has been coded to a node by coder 1, and then compare that to coder 2. In other words, I don't care about the exact text highlighted and added to a node; I just want to know whether the field contains any text coded to the node or not, and then compare those (a sketch of this cell-level comparison appears after this list). This is hard to explain in writing, but hopefully it makes some sense. Thanks for the help! Example: each row of the single column in the dataset represents a survey response. Two coders are coding whether the survey response is generally positive or generally negative. I want to see the agreement between coders on positive versus negative at the level of the response, not by the exact text selected and coded to the node.
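
A numeric sketch for question 1, illustrating how percentage agreement can sit near 99% while Kappa stays low: when almost every unit is left uncoded by both coders, the agreement expected by chance is already close to 100%, so Kappa has almost no room above chance. The data below are made up for illustration; Python is used only as a convenient calculator.

    # Cohen's kappa vs. raw percentage agreement for two coders' binary
    # decisions over the same units. Hypothetical data: out of 1000 units,
    # each coder codes only 5, and they overlap on just 1.

    def cohens_kappa(coder1, coder2):
        n = len(coder1)
        observed = sum(a == b for a, b in zip(coder1, coder2)) / n
        # Chance agreement from each coder's marginal proportions.
        p1, p2 = sum(coder1) / n, sum(coder2) / n
        expected = p1 * p2 + (1 - p1) * (1 - p2)
        if expected == 1:
            return 1.0  # no room left above chance
        return (observed - expected) / (1 - expected)

    coder1 = [1] * 5 + [0] * 995
    coder2 = [0] * 4 + [1] * 5 + [0] * 991
    agree = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
    print(f"percentage agreement: {agree:.1%}")          # 99.2%
    print(f"kappa: {cohens_kappa(coder1, coder2):.2f}")  # about 0.20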
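
For question 2, here is one reading of the quantities in NVivo's example spreadsheet (worth verifying against the NVivo help, since the exact definitions are theirs): TU is the total units in a source (NVivo counts characters), TA is the total units of agreement (characters both coders coded to the node, plus characters neither coded), and Sum EF is the agreement expected by chance, derived from each coder's coded totals. Each row's Kappa is then (TA - EF) / (TU - EF), which is exactly the Excel formula quoted in question 3, with the IF guarding the case TU = EF.

    # Per-row (source x node) kappa in the NVivo spreadsheet's terms.
    # The EF decomposition below is an assumption based on the standard
    # chance-agreement formula; check it against NVivo's documentation.

    def row_kappa(tu, coded_a, coded_b, ta):
        # Expected chance agreement: both code a unit, or both leave it uncoded.
        ef = (coded_a * coded_b) / tu + ((tu - coded_a) * (tu - coded_b)) / tu
        if tu - ef == 0:
            return 1.0  # mirrors IF($Q-$O=0, 1, ...) in the example sheet
        return (ta - ef) / (tu - ef)

    # Hypothetical row: a 2000-character source; coder A codes 300 characters,
    # coder B codes 250, and 200 of those characters overlap.
    tu, a, b, overlap = 2000, 300, 250, 200
    ta = overlap + (tu - (a + b - overlap))  # agree-coded + agree-uncoded
    print(f"kappa = {row_kappa(tu, a, b, ta):.3f}")  # 0.684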
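
For question 3, the Excel AVERAGE error usually means the selected range contains no numeric cells or is picking up an error value from another cell; one way to sidestep it is to average the export outside Excel. This sketch assumes the comparison was exported to CSV with columns named "Node", "Kappa" and "Agreement (%)" (the headers in a real export may differ, so adjust them to match), and it weights all sources equally by taking a plain per-node mean while skipping blank or malformed rows.

    import csv
    from collections import defaultdict

    kappas = defaultdict(list)       # node -> list of per-source kappas
    agreements = defaultdict(list)   # node -> list of per-source % agreements

    with open("coding_comparison.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                k = float(row["Kappa"])
                pct = float(row["Agreement (%)"])
            except (KeyError, TypeError, ValueError):
                continue  # skip blanks instead of raising a divide-by-zero
            kappas[row["Node"]].append(k)
            agreements[row["Node"]].append(pct)

    for node in sorted(kappas):
        n = len(kappas[node])  # equal weight per source: simple mean
        print(f"{node}: mean kappa {sum(kappas[node]) / n:.3f}, "
              f"mean agreement {sum(agreements[node]) / n:.1f}% ({n} sources)")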
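
For question 5, one workaround outside NVivo is to reduce each cell to a yes/no decision per coder ("did any text in this cell get coded to the node?") and compute agreement and Kappa over those binary decisions. The dictionaries below are hypothetical; in practice they could be built from each coder's exported coding references per cell.

    def cell_level_kappa(cells_a, cells_b):
        """cells_a / cells_b map a cell id to True if that coder coded any
        text in the cell to the node, else False."""
        ids = sorted(set(cells_a) | set(cells_b))
        a = [cells_a.get(i, False) for i in ids]
        b = [cells_b.get(i, False) for i in ids]
        n = len(ids)
        observed = sum(x == y for x, y in zip(a, b)) / n
        pa, pb = sum(a) / n, sum(b) / n
        expected = pa * pb + (1 - pa) * (1 - pb)
        return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

    # Hypothetical: six survey responses, each coder marks "Positive" or not.
    coder1 = {1: True, 2: False, 3: True, 4: True, 5: False, 6: False}
    coder2 = {1: True, 2: False, 3: False, 4: True, 5: False, 6: True}
    print(f"cell-level kappa: {cell_level_kappa(coder1, coder2):.2f}")  # 0.33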