Showing results for tags 'coding comparison'.

Found 9 results

  1. Can someone give me a quick answer to the following question: if I rename a node and then use that newly renamed node when double-coding a previously coded document, will a Coding Comparison query still work properly? A similar question for merging nodes: if I combine nodes under a new node name (by copy-pasting coded references to the new node and then deleting the old one), can I then run a Coding Comparison query between 1) a document coded with the old (pre-merge) codebook and 2) a document coded with the new codebook? Context: my collaborator partially coded our data set a while ago. I am now learning the codebook, we are making edits to it, and with that slightly revised codebook I am double-coding documents she already coded, in an attempt to reach >70% kappa before moving on to single-coding the rest of our dataset myself. We merged all child nodes under 3 parent nodes to simplify the codebook and have made a few other small changes after various consensus conversations. When I ran a coding comparison, NVivo no longer seems to recognize that she also previously coded certain excerpts under the merged nodes. Any insight on this is appreciated!
  2. Hello: We have two focus groups and two coders, and both coders coded each focus group. I ran a coding comparison between Matt and Amanda, the two coders. The results show two cases for each participant: the coders have 100% agreement on one, and the actual % agreement (e.g., 94%) on the other, duplicate case. Looking at Cases and Case Classifications, I see one instance for each participant, as it should be. How did this happen, and how can I correct it? There are only two files, Focus Group 1 and Focus Group 2, and each participant was in either FG1 or FG2. Thanks, Audrey
  3. NVivo calculates inter-rater reliability based on how much agreement coders have on a per-character basis, that is, how much they agree on highlighting particular sections of a sentence. This is NOT the unit of analysis that I need. I need to conduct a coding comparison based on the codes used per QUESTION or per block/paragraph of text. Has anyone found a workaround that allows the unit of analysis to be a question or block of text INSTEAD of a character?
  4. In NVivo 9, I imported the other coder's project into my project to run an initial coding comparison query. Both of us coded the same two transcripts, and I tried selecting first a parent node and then just a child node for the comparison. Each time, NVivo crashes without running the query. Since this project is in the early stages of coding, I need to be sure that I can run a coding comparison query at any size as the file grows. Thanks!
  5. Can anyone give me some examples of how they have reported their coding comparison queries? This is my first time running a coding comparison query, and I'm not quite sure of the conventions for writing up the results. Thank you, Nat
  6. Hello, I'm stuck (yet again) on an NVivo function. I set up an intercoder test and had 2 coders code 48 images (sources) against 27 codes (nodes), and I'm now trying to deal with the results of the coding comparison, which NVivo only calculates for every source against every node, resulting in a spreadsheet with 1,296 lines of comparison. Exporting the results into Excel to calculate averages also yields nothing, as it keeps returning: "Error: Evaluation of function AVERAGE caused a divide by zero error." I can see formulas [=IF($Q30-$O30=0,1,($P30-$O30)/($Q30-O30))] in the NVivo example (http://redirect.qsrinternational.com/examples-coding-comparison-nv10-en.htm), but since I don't use Excel very often and am not terribly proficient with formulas, I'm not sure how to set up and apply a formula to my spreadsheet columns to get meaningful Kappa and % agreement results. All sources are to be weighted equally. Another issue: when looking at the node reference counts, I can't see which coder applied which codes to which images. Node view only shows who coded (modified) what last. I assigned colours to each coder, and those do not show up either. Any help would really be appreciated; I'm terribly behind in my work. Thanks!
  7. Hello, I'm trying to figure out the best way to create 2 intercoder user profiles for a coding comparison, in order to get the Kappa for my coding structure. I thought I had figured it out: after checking "prompt for user on launch" and entering a new user, I applied codes to a single node and compared against my earlier (regular) profile, which seemed to work. However, upon closing and reopening NVivo, I see that the coder profile I created is gone; it cannot be found under Project Info > Users, which only shows my original profile (as does Options > General). Is this because I didn't save the test coding? Does NVivo only save a profile if that profile has modified work on the project? I just want to be absolutely clear on this before proceeding further. So, to reiterate: how do I create 2 additional user profiles that will be saved, so that I can calculate % agreement and Kappa and make sure no data is lost? Thanks.
  8. Hi everybody, I have a question regarding coding comparison queries. I find it hard to make sense of percentage agreement. Basically, I've run a coding comparison query to see whether individual codes are reliable. Since NVivo includes negative agreement in the calculation, it is hard for me to judge whether we have achieved an acceptable percentage for a particular code. I know there is the column "A + B (%)", but because that percentage is relative to the overall source size, it is not helpful either. Do you have any suggestions for how I can calculate positive agreement for individual codes? I hope my question is understandable. Thanks in advance, Marie
  9. I coded some data. I then copied the file and gave it to another coder. She coded the same source documents against the same nodes and gave the file back to me. Now what do I do? When I try to run a coding comparison query, it asks me to compare between User Group A and User Group B. What does that mean? Regardless of what it means, there is only one user ID available for selection in both A and B, and that is mine. I've also looked under File > Info > Project Properties, where there is a Users tab; it only has one user ID in it as well. How do I see and compare the work of the other coder? Thanks.
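
For the per-question unit-of-analysis problem raised in question 3, one workaround (outside NVivo) is to export each coder's code assignments per question or paragraph and compute Cohen's kappa at the block level yourself. This is only a minimal sketch under that assumption; the export step and the one-label-per-block simplification are hypothetical, not an NVivo feature.

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two parallel lists of labels, one label per text
    block (question or paragraph), rather than per character."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of blocks where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    labels = set(coder_a) | set(coder_b)
    expected = sum(
        (coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels
    )
    if expected == 1.0:  # degenerate case: both coders used a single label
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical per-paragraph labels for two coders:
a = ["theme1", "theme2", "theme1", "theme3"]
b = ["theme1", "theme2", "theme2", "theme3"]
print(round(cohens_kappa(a, b), 3))  # → 0.636
```

If blocks can carry several codes at once, the same idea applies per code: score each block 1/0 for "code applied" by each coder and run the function on those binary lists.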
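
The spreadsheet formula quoted in question 6 appears to compute a per-row kappa of the form (observed − expected) / (total − expected), returning 1 when the denominator is zero. A sketch of that logic in Python follows; the column meanings (O = expected agreement, P = observed agreement, Q = total units) are an assumption read off the formula, not a confirmed description of the QSR example, so check them against your own export. Skipping empty rows also avoids the divide-by-zero AVERAGE error.

```python
def row_kappa(expected, observed, total):
    """Mirror of =IF($Q-$O=0, 1, ($P-$O)/($Q-$O)): kappa is defined as 1
    when the total equals the expected agreement (nothing left to disagree
    about), otherwise as chance-corrected observed agreement."""
    if total - expected == 0:
        return 1.0
    return (observed - expected) / (total - expected)

def average_kappa(rows):
    """Equal-weight average over (expected, observed, total) rows; rows with
    no codable units are skipped, which is what trips up Excel's AVERAGE."""
    kappas = [row_kappa(o, p, q) for (o, p, q) in rows if q > 0]
    return sum(kappas) / len(kappas) if kappas else float("nan")

# Hypothetical rows from an exported source-by-node comparison:
rows = [(2.0, 9.0, 10.0), (5.0, 5.0, 5.0), (1.0, 7.0, 10.0)]
print(round(average_kappa(rows), 3))  # → 0.847
```

Because every source/node row is weighted equally here, this matches the "all sources weighted equally" requirement in the question; a size-weighted average would instead weight each row by its total units.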
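
For the positive-agreement problem in question 8, the standard remedy is the proportion of specific (positive) agreement, which ignores the jointly uncoded text that inflates overall percentage agreement. The sketch below assumes you can read, per code, the number of characters coded by both coders, by coder A only, and by coder B only from the query output; those three inputs are an assumption about what your export provides.

```python
def positive_agreement(both, a_only, b_only):
    """Proportion of specific agreement for one code:
    2*both / (2*both + a_only + b_only), i.e. a Dice coefficient over the
    coded characters. Jointly uncoded text does not enter the calculation."""
    denom = 2 * both + a_only + b_only
    if denom == 0:  # neither coder applied the code at all
        return float("nan")
    return 2 * both / denom

# e.g. 300 characters coded by both, 50 by A only, 100 by B only:
print(round(positive_agreement(300, 50, 100), 3))  # → 0.8
```

Unlike NVivo's overall percentage agreement, this value drops toward 0 when the coders rarely overlap on a code, even in a long source where most text is uncoded by both.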