Showing results for tags 'coding comparison'.

Found 7 results

  1. NVivo calculates inter-rater reliability based on how much agreement coders have on a per-character basis; that is, how far they agree on highlighting the same stretches of a sentence. This is NOT the unit of analysis that I need. I need to conduct a coding comparison based on the codes used per QUESTION or per block/paragraph of text. Has anyone found a workaround that allows the unit of analysis to be a question or block of text INSTEAD of a character? (One way to compute per-block agreement outside NVivo is sketched after this list.)
  2. In NVivo 9, I imported the other coder's project into my project to run an initial coding comparison query. We had both coded the same two transcripts, and I tried selecting coding at a family node and then just coding at a child node. Each time, NVivo crashes without running the query. Since this project is in its early stages of coding, I need to be sure that I can still run a coding comparison query as the file grows, at any size. Thanks!
  3. Can anyone give me some examples of how they have reported their coding comparison queries? This is my first time running a coding comparison query and I'm not quite sure of conventions for writing up results. Thank you, Nat
  4. Hello, I'm stuck (yet again) on using an NV function. I set up an intercoder test and had 2 coders code 48 images (sources) against 27 codes (nodes), and I'm trying to deal with the results of the coding comparison, which NV only calculates for every source against every node, resulting in a spreadsheet with 1,296 lines of comparison. Exporting the results into Excel to try and calculate averages also yields nothing, as it keeps returning: "Error: Evaluation of function AVERAGE caused a divide by zero error." I can see formulas [=IF($Q30-$O30=0,1,($P30-$O30)/($Q30-O30))] in the NV example… (A guarded version of that calculation is sketched after this list.)
  5. Hello, I'm trying to figure out the best way to create two inter-coder user profiles for a coding comparison, in order to get the kappa for my coding structure. I thought I had figured it out: by checking "prompt for user on launch" and entering a new user, I applied codes to a single node and compared them with my earlier (regular) profile, which seems to have worked. However, upon closing and reopening NV, I see that the coder profile I created is gone; it cannot be found under Project Info > Users, which shows only my original profile, both there and under Options > General. Is this because I didn't…
  6. Hi everybody, I have a question regarding coding comparison queries. I find it hard to make sense of percentage agreement. Basically, I've run a coding comparison query to see whether individual codes are reliable. Since NVivo includes negative agreement in the calculation, it is hard for me to judge whether we have achieved an acceptable percentage for a particular code. I know that there is the column "A + B (%)", but because this percentage is relative to the overall source size, it is not helpful either. Do you have any suggestions for how I can calculate positive agreement? (A positive-agreement calculation is sketched after this list.)
  7. I coded some data. I then copied the file and gave it to another coder. She coded the same source documents against the same nodes and gave the file back to me. Now what do I do? When I try to run a coding comparison query, it asks me to compare between User Group A and User Group B. What does that mean? Regardless of what it means, there is only one user ID available for selection in both A and B, and that is mine. I've also looked in File > Info > Project Properties, where there is a tab called Users; it only has one user ID in it as well. How do I see/compare the work of the other coder?
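
For question 1, a minimal sketch of per-question/per-block agreement computed outside NVivo, in Python. It assumes each coder's coding can be exported as (question_id, code) pairs, e.g. via a coding matrix; the names here are illustrative, not an NVivo API.

    # Per-block agreement: a code either applies to a question/block
    # or it does not, so the unit of analysis is the block, not the
    # character.
    def block_agreement(coder_a, coder_b, questions, codes):
        """Percent agreement per code, judged per question/block.
        coder_a, coder_b are sets of (question_id, code) pairs."""
        results = {}
        for code in codes:
            agree = sum(((q, code) in coder_a) == ((q, code) in coder_b)
                        for q in questions)
            results[code] = agree / len(questions)
        return results

    # Example: two coders, three questions, two codes.
    a = {(1, "stress"), (2, "stress"), (3, "coping")}
    b = {(1, "stress"), (3, "coping"), (3, "stress")}
    print(block_agreement(a, b, questions=[1, 2, 3],
                          codes=["stress", "coping"]))
    # {'stress': 0.3333333333333333, 'coping': 1.0}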
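For question 4, the quoted formula has the shape of a guarded kappa, kappa = (observed - expected) / (total - expected), defined as 1 when the denominator is zero. Excel's "divide by zero" from AVERAGE usually means the averaged range contains no numeric cells, e.g. when the export arrives as text. A minimal Python sketch of the same guarded calculation; the column meanings (P = observed agreement, O = expected agreement, Q = total) are assumptions read off the formula, so check them against your own export.

    # Guarded kappa per source x node row, then averaged across rows.
    def guarded_kappa(observed, expected, total):
        denom = total - expected
        return 1.0 if denom == 0 else (observed - expected) / denom

    rows = [
        # (observed, expected, total) per row; replace with the 1,296
        # rows from the NV export.
        (0.98, 0.90, 1.00),
        (1.00, 1.00, 1.00),  # zero denominator -> kappa defined as 1
    ]
    kappas = [guarded_kappa(p, o, q) for (p, o, q) in rows]
    print(sum(kappas) / len(kappas))  # average kappa across all rows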
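For question 6, one standard way to leave negative agreement out is positive (specific) agreement, PA = 2*AB / (2*AB + A-only + B-only). A minimal sketch, assuming the inputs are the "A and B (%)", "A and Not B (%)" and "B and Not A (%)" columns from the exported query results:

    # Positive agreement ignores the "Not A and Not B" cell that
    # inflates overall percent agreement on sparsely coded sources.
    def positive_agreement(ab, a_only, b_only):
        denom = 2 * ab + a_only + b_only
        return 2 * ab / denom if denom else float("nan")

    # Example row: 12% coded by both, 3% only by coder A, 5% only by
    # coder B; the 80% coded by neither is deliberately excluded.
    print(positive_agreement(12.0, 3.0, 5.0))  # 0.75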