
Posts posted by Administrator

  1. From: Kevin Sayers

    Posted: Wed 27/03/2002

     

    I have a rather small project I am working on. I am travelling between two machines at two sites and would like to house the project on a 100 MB zip disk rather than occupy space on both machines. I have had problems putting projects on zip disks before (specifically, getting the second machine to recognise the project file). Is there a resource or instruction sheet anyone could recommend for doing this?

     

    From: Pat Bazeley

    Posted: Wed 27/03/2002

     

    There's no problem in working from a zip disk, with the only proviso being the speed of the connection between your zip drive and your computer. Once you have opened the project on a particular computer, NVivo will recognise it next time, including the location. If you choose instead to work on the hard drive and use the zip to copy from one computer to another, make sure you have a folder on the zip to copy the project into. NVivo will not allow you to "save as" directly to a drive - it must be to a folder. Do check out the general guidance on backing up in the FAQ section of the QSR website (http://www.qsrinternational.com).

  2. From: Julie Evans

    Posted: Wed 30/10/2002

     

    Has anybody used the command assistant in N6 to assess coder reliability?

     

    Three months ago I set two coders off on an independent coding task in a large dataset with the expectation of using N6's automated procedure to assess their level of coding agreement and to easily isolate passages of text where their coding disagreed. Indeed I upgraded to N6 specifically for this purpose. Before I set my coders to work, I tested the procedure in N6 using one document and three nodes, to make sure it worked and that I understood what it was doing. It worked perfectly.

     

    My coders have now finished their coding of 1011 (short) documents to a tree that contains 100 nodes arranged in 15 branches. Imagine my disappointment when N6 crashed when I attempted to build the command file using the command assistant. I tried repeating it using only 20 documents and it crashed again. I then tried it with only one document. N6 still could not write the commands to the command assistant, but it did offer to save them to a command file, which I successfully ran from the Project menu. However, the matrix table generated would not display more than the first 17 rows (equivalent to only 5 and a half of my 100 nodes). I was relying on clicking on the cells in the table to quickly find the passages of text that had been coded differently by the two coders. I know this information is also contained within the reports generated by the procedure, but these passages are still not easy to find because so much unnecessary information is also generated. It is easier to find them by requesting a document report with cross-references without doing the coder reliability procedure at all.

     

    I am now going to have to revert to a manual process of examining each document individually to find the passages of discrepant coding, whereas I could have asked my coders to discuss their coding as they went along to ensure complete agreement.

     

    Has anyone else experienced difficulties with this procedure, or am I missing something?

     

    From: Shyanika Wijesinha Rose

    Posted: Wed 30/10/2002

     

    I found a similar constraint and used the following workaround. It's a little complicated, but it does work. Run a command file with a search-index-system command using JUST-ONE for each pair of parallel nodes between the two coders, creating a new node with only the coding that was missed.

     

    This will create a node that holds only the references to text that just one coder coded. The command file below can be generated using the merge function in Word if you have a file that has your node addresses <<node>> and a list of sequential numbers <<number>>. In this particular command file, (100 1) is the base node for coder 1 and (100 2) is the base node for coder 2 (change these to your own base nodes).

     

    It will save the missed nodes at (300 1), (300 2), etc. (you can change this if you already have coding at node 300). Then you can browse all the nodes under 300 to see the text missed for each node.

     

    (search-index-system
    (JUST-ONE (100 1 <<node>>) (100 2 <<node>>))
    node (300 <<number>>)
    node-title "Miss <<node>>"
    )
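
    If you would rather not set up the Word mail merge, the same command file could be written by a short script. Here is a minimal Python sketch, purely illustrative - the child node addresses, output file name, and the (100 1)/(100 2) base nodes are assumptions to adapt to your own tree:

    # Illustrative only: writes the JUST-ONE command file that the Word
    # merge described above would otherwise produce. The child node
    # addresses and the output file name are invented for this sketch.
    child_nodes = ["1 1", "1 2", "2 1"]  # addresses under each base node

    with open("just_one.cmd", "w") as f:
        for number, node in enumerate(child_nodes, start=1):
            f.write("(search-index-system\n")
            f.write("(JUST-ONE (100 1 %s) (100 2 %s))\n" % (node, node))
            f.write("node (300 %d)\n" % number)
            f.write('node-title "Miss %s"\n' % node)
            f.write(")\n\n")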

  3. From: Lizzie Bellarby

    Posted: Wed 21/08/2002

     

    Please will you email any articles or links you may have relating to NVivo and GTM. I thought I found a reference by Lyn Richards called 'what's in a name?' and I can't find it anywhere. It was dated 1999. Does anyone have an online copy of it please?

     

    From: Lyn Richards

    Posted: Wed 21/08/2002

     

    Liz, I've never found time to publish my thinking about what, in a paper to BSA way back in 1999, I called The Grounded Red Herring.

     

    The BSA paper is of historic relevance only for your work, since it way predates NVivo. But of course the thinking in it is relevant to the design of NVivo. You'll find GT techniques mentioned throughout Using NVivo - when I'm talking about use of free nodes, for example, or dimensionalising concepts by creating new finer categories during coding on from a node as the category develops, or modelling. These are the techniques - particularly the theory-refining ones - with which software can help greatly. And of course NVivo is particularly influenced by the need for such techniques (and as someone commented on the Forum recently, the name is inter alia a thank-you to Strauss for his friendship and glowing interest in the software challenge).

     

    But there is a wider debate (or there was!) about whether qualitative software in general, or QSR's in particular, is skewed towards support for Grounded Theory work. This has puzzled me for years, though it's now a truncated debate, last heard of in the brief discussion in Sociological Research Online. See Coffey, A., Holbrook, B. and Atkinson, P. (1996) 'Qualitative Data Analysis: Technologies and Representations', Sociological Research Online, vol. 1, no. 1. http://www.socresonline.org.uk/socresonline

     

    In the 1999 paper I wrote that the debate:

    ... requires attention, both because it renewed earlier assumptions about the homogeneity and common purpose of software, and because it introduced new assumptions about coding. First, homogeneity. Reviving a paper in the 1992 conference by Lonkila, Coffey and Atkinson saw the danger in "the unnecessarily close equation of grounded theory, coding and software" (1996, 7.5). Like Lonkila, they were at pains to distinguish actual use and software function from association.

    Grounded theorizing is more than coding, and software can be used to do more than code-and-retrieve textual data. The point does not concern the full potential of CAQDAS, nor the true nature of grounded theorizing; rather the danger we identify lies in the glib association between the two, linked by an emphasis on data coding procedures. (1996, 7.5).

     

    Part of this argument... is based on concern that coding can be seen as an end in itself. But part is based on two quite different sets of concerns:

    - that this method is linked with grounded theory method in (somebody's) "glib association"

    - that QDA software is homogeneous and homogeneously promoting "as an industry-wide gold standard" the "elementary set of assumptions and procedures for the organization and management of qualitative data" of code and retrieve method.

     

    It's no secret that I have for years expressed concerns about the ways that software seduces us into overreliance on coding for retrieval, but this adds another puzzle, since GT does not rely on coding of this sort.

     

    In the 1999 paper I went on to puzzle about the claimed GT link - to summarize, neither of the "founders" of the now very divided GT method ever used qual software (though Strauss was very generous with his time and thoughts on the subject):

    - supporting the various techniques of GT is a real challenge for software

    - software does a great number of things for qual research that GT researchers don't want to do!

     

    - so whilst several developers, including QSR, have paid great attention to the needs of GT research, the equation of GT with qual software is pretty strange. My own understanding and use of GT methods especially led me to the conclusion that code and retrieve methods, the apparent reason for this association, had very little affinity with GT techniques.

     

    Hope that helps. The paper's not even on the QSR website but I promise - so long as *nobody* writes the Forum asking for it! - that we will get it up there when Ted comes back from holidays and Sue and I have a moment free here at the Help Desk!

  4. From: Corinne Nyquist

    Posted: Thu 14/02/2002

     

    I have just loaded NUD*IST 4 and have begun the interviews for my PhD project. I have just had them transcribed from the tape into Microsoft Word by someone I hired. Are there any things I should do or change now to prepare for using NUD*IST later?

     

    From: Pat Bazeley

    Posted: Thu 14/02/2002

     

    Your query is rather too general for someone to be able to answer it effectively. It sounds like you would benefit, however, from either some training or at least some use of the self-teaching materials for N4 which are available for download from the QSR web site; otherwise you will not be aware of small procedures which could save you many hours of work and/or agony.

  5. From: Chari Fuerstenau

    Posted: Wed 20/11/2002

     

    Can anyone suggest a method that will help us split a word file into separate documents for import into N6?

     

    Our qualitative responses to a survey come to us in one huge text file. The document name is on the first line, followed (I believe) by a hard return, then the transcribed response. There are thousands of these, and the responses to four separate questions are all in the same file (with no individual question headers, of course).

     

    Thanks to this group, I've learned how to merge the data from spreadsheets into Word and then split it into files, but I can't seem to make any headway on this.

     

    If it helps, here's basically what each entry looks like:

     

    54076:

    response response response response response response response response response response response response response response response response response response response response response response response response response response

     

    54088:

    response response response response response response response response response response response response

     

    Can anyone offer a suggestion? I'm willing to try about anything!

     

    From: Pat Bazeley

    Posted: Wed 20/11/2002

     

    The clue is to look for a common feature that indicates either an end of file or a beginning of file. If the entries are as in your example, the double hard return could be used to distinguish respondents (assuming this is the only place it is used) - or you could search for a colon (if the ID number is the only place one is used). You would then create a macro of the type you mention (i.e. the one in my notes) but tell it to look for the double hard return (or colon) rather than a section break. So, to do this ...

     

    With the document open and your cursor at the beginning, indicate to Word that you want to Record a macro. Give it a name (no spaces) and say OK. A recording icon will appear on your screen. Use the Find menu to find the identifying feature (if the double hard return, it would be Find: ^p^p; if the colon, you would need to do Find Next twice). If the feature is to be included as the end of the first document, press the right arrow key; if it is the beginning of the next, press the left arrow key (or Home if you have to use the colon, so it takes you to the beginning of that line). Then do a Shift-Ctrl-Home to select all the text for the first split document.

     

    Choose Cut, click on the new document icon in the Word toolbar, and Paste. Click on Save, choose to save as text only, and close. Your cursor will now be waiting at the top of the original document, ready for the next split. Stop recording at this point. You then need to edit the macro to remove the document name where it says Save As, so that Word will automatically provide a name based on the first line of text - in your case, the respondent number - rather than continually overwriting the one file.

     

    You are then ready to run the macro, but in order to run it enough times to split the whole file, you need to use the Repeat macro that is in my notes - just modify it so that it refers to the new one you've written (whatever you've named it) and the correct number of times it has to repeat (i.e. the number of respondents).

    Don't save the changes to the original long document (for safety!).

     

    For creating text-only files for N6, the fact that Word (annoyingly) puts a .doc extension on each file is not a problem - N6 will still recognise them as plain text files and import them OK (the equivalent for NVivo is a problem, and you then use DOS to change the extensions). It makes life easier if you pre-set Word to send all the new files into the rawfiles folder for your project (in Options) - don't forget to change it back again afterwards!
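
    For those comfortable with a little scripting, the same split can be done outside Word entirely. A minimal Python sketch, assuming the entry layout shown in the question (an ID line ending in a colon, with a blank line between entries); the input file name is invented:

    # Minimal sketch: split the single survey file into one plain-text
    # file per respondent. Assumes each entry starts with an ID line such
    # as "54076:"; the input file name is an assumption for illustration.
    import re

    with open("survey.txt") as f:
        text = f.read()

    # Split just before each line that starts with digits and a colon.
    entries = re.split(r"\n(?=\d+:)", text)

    for entry in entries:
        entry = entry.strip()
        if not entry:
            continue
        ident = entry.split(":", 1)[0].strip()  # e.g. "54076"
        with open(ident + ".txt", "w") as out:
            out.write(entry + "\n")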

     

    For those of you who want more info on this kind of data manipulation (and the text of the Repeat macro!), check the info under Research Notes on my web site.

  6. From: Lesley Doyle

    Posted: Mon 4/02/2002

     

    I have followed the questions/answers on numbers of documents to use with NVivo with interest.

     

    What is the ideal, and what is the maximum, number of documents that NVivo can handle comfortably, i.e. without 'clogging up'? Is it the number of documents, rather than the coding, which prescribes the limits of its capacity?

     

    What are the limits to the amount of coding, i.e. nodes, that can be created and used efficiently with the program? I have noticed that the use of, for example, the matrix intersection creates large numbers of extra nodes, which is worrying when the number of nodes is already in the hundreds with much more to go.

     

    Incidentally, one thing I have found NVivo doesn't like is fast scrolling up and down a document or node. It doesn't seem to be able to keep up. Having more than two or three documents/nodes open at the same time creates the same problem. It prefers a clear desk (always something to aspire to, if rarely achieved!).

     

    From: Pat Bazeley

    Posted: Mon 4/02/2002

     

    The QSR people can no doubt give you a more technical answer (indeed, there may even be one on the web site), but here are a couple of points about how NVivo works (and how you might work with it) to think about meanwhile:

     

    The number and size of documents is a major component in the size of a project in terms of memory consumed while working with the data.

     

    The number of nodes has much less impact because what is saved at nodes is simply a reference to the documents, not any actual text.

     

    Multimedia files (e.g. as databites) are very demanding when it comes to backing up.

     

    Whenever you run a search, the program has to open and close each document - so that takes time. For a complex matrix search, it can be quite a lot of time if there are multiple cells and a lot of documents.

     

    If you're running a lot of matrix searches you will, as you suggest, very quickly build a huge set of nodes - and that can slow up the operation of the node explorer, apart from anything else. While search results are saved as nodes, it is not intended that you keep multiple sets of such nodes forever -- try to get into the pattern of running a search and dealing with the results (which means recording your interpretation of them and any essential bits and pieces from them - e.g. in a project journal) before going on to the next search. This is good practice in any case, but it also means you can delete those results before you create more, though on rare occasions with particularly critical results you may wish to store them - in which case, move them into another place in the tree nodes so you can do a routine clean up of all nodes under Search Results on a regular basis.

     

    For those considering working on projects with large numbers of documents, N5 is definitely the way to go. This is because the documents are in plain text, which is much less demanding on memory; also, structured text units, which are not as readily modifiable as NVivo's free-flowing text, help speed things up. There are other advantages to using N5 for such projects as well - typically the documents (and/or the analysis) have some structure which can allow the user to take advantage of the command file capacity of the program to automate some functions. Nodes serve the functions of both attributes and sets (as well, of course, as of coding), which can make life simpler too, and (a final very small point!) if you're "processing" your way through a lot of data, text units make it easier to rapidly select text for coding (use the down arrow key, for example) and also provide a more efficient boundary for wildcard searches than does the free-flowing text of NVivo.

    NVivo, of course, is just great if you're working in detail with a more manageable amount of text - the sort of volume one might expect in a "truly qualitative" project.

     

    From: Lioness Ayres

    Posted: Mon 4/02/2002

     

    Can't resist this one . . . .

    What is a "truly qualitative" project? What kinds of projects are "qualitative" but not "truly," setting aside mixed methods projects? Do projects that re "qualitative" but not "truly" have less merit than those that are "truly" qualitative? How are the two kinds of projects evaluated?

     

    And while I'm on the subject, how do people distinguish, if at all, "qualitative research" and "interpretive research" and does this distinction have some impact on the issue of being "truly" qualitative?

     

    From: Sarah Delaney

    Posted: Mon 4/02/2002

     

    oh my god!! you've just asked one of those deep and meaningful questions that send us all into quivers - post it to qual-soft as well and see what Clive Seale says (he has a lot to say on this topic)

     

    have to think about this one, but my short thought is that a 'truly' qualitative approach is a contradiction in terms, seeing as qual is normally based on the phenomenological paradigm that holds that there is no objective, measurable truth outside human interpretation. interpretation is a big thing in qual...

     

    but it's much more complex than that and if i didn't have to work i would say more..

     

    From: R. Allan Reese

    Posted: Mon 4/02/2002

     

    In view of opinions expressed at times on this list, how dare people attempt to quantify qualitative research!

  7. From: Jenny Murray

    Posted: Tue 21/05/2002

     

    I am using NUD*IST 4. I have been trying to have a look at my coding using coding stripes and cross-references. However, with coding stripes, for example, it only tells you if a specific text unit has been coded under one or more categories.

     

    What I need to look at is whether a document that is coded at one category is also coded at another specific category. By selecting the category and making a report with coding stripes including the other category of interest, I only get a hit if the same text unit was coded at both categories. How do I find out if the person (i.e. document) coded at one category is also coded at another? Hope this makes sense. Not everything in black and white does!

     

    From: Pat Bazeley

    Posted: Tue 21/05/2002

     

    Two possibilities here - one quick and one more thorough:

    1) Highlight the whole document (Ctrl A) and click X on your keyboard to get a quick overview of all the nodes for which your document has coding

    2) Use the Proximity (Near) search operator and specify the context as Document. This will allow you to find all the text coded at node A and at node B provided both are in the same document. (You might then want to use Excluding-docs-from to see what is said for Node A in documents that are not coded at Node B).

  8. From: Joao Vieira da Cunha

    Posted: Thu 7/03/2002

     

    I have a couple of thousand docs in an NVivo project, with several attributes, including date. I want to make an MS Word document with these documents sorted by date. Is there any way to do a search in NVivo whose outcome would be a report with these documents arranged by date?

     

    From: Pat Bazeley

    Posted: Thu 7/03/2002

     

    I guess you've figured by the lack of responses so far that there isn't a real easy answer to this one, although it is logical to expect that there really should be. You can do an attribute lookup and find all documents meeting the criteria set - but that would be painful for a long series of dates.

     

    The best solution I can think of is to create a matrix intersection of a node that codes all text by all the values of the date attribute. To achieve this you will need to create the node. Make a new node somewhere in your coding system (it could be a free node) for "all docs". In the search tool, set the scope to All documents (the default), and then click on Save current scope as Node (just at the bottom of the scoping section on the right side of the dialogue) and choose the node you have just made. Then do the matrix intersection, using this node. (The alternative to using a node would be to use a text pattern searching for something like "the" - a word that appears in all docs - with the results spread to enclosing documents.) A problem with this method is that NVivo is likely to scramble the order of the dates in the resulting display. Also, you will need to extract the data one cell at a time for reporting purposes (if preserving formatting is not an issue, the quickest way to get it all into a report is to browse then copy and paste into Word, just making sure you put a heading in before pasting to indicate the relevant attributes of the bit you are about to paste) - at least doing that allows you to extract them in the right order!
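
    A scripted route, outside NVivo entirely, is also conceivable if you can export the documents as plain text and the date attribute as a table. A hypothetical Python sketch - the file names and CSV layout here are invented, not anything NVivo produces by default:

    # Hypothetical sketch: assemble exported plain-text documents into one
    # file sorted by a date attribute exported as a CSV table. File names
    # and the CSV layout are assumptions for illustration.
    import csv

    rows = []
    with open("attributes.csv") as f:        # columns: document, date (ISO)
        for row in csv.DictReader(f):
            rows.append((row["date"], row["document"]))

    with open("combined.txt", "w") as out:
        for date, name in sorted(rows):      # ISO dates sort chronologically
            out.write("== %s (%s) ==\n" % (name, date))
            with open(name + ".txt") as doc:  # one exported file per document
                out.write(doc.read() + "\n\n")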

  9. From: Keira Armstrong

    Posted: Thu 29/08/2002

     

    Does anyone know of ANY published studies (or web posted articles) that report calculating kappa statistics on coding of focus group data?

     

    Does anyone have an opinion about using kappas for data collected as a part of formative research?

     

    Does this seem like a useful step?

     

    From: Jens Seeberg

    Posted: Thu 29/08/2002

     

    The use of statistical tests on quantitative aspects of qualitative data generated by programs such as NVivo and NUD*IST generally calls for great caution. In the example you mention, a kappa test is intended to measure whether agreement is different from what could be expected by chance. This may be a useful instrument if the participants do not influence each other's responses. However, in a focus group setting you would expect a (at times quite strong) normative process, sometimes resulting in consensus in the group (depending on a host of factors: composition, topic, etc.) - i.e. a non-random process. FGDs - and qualitative methods in general - do not and should not adhere to the rules of good practice for quantitative research. Ignoring this may result in flawed statistics and under-analysis of the qualitative material.
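
    For readers wanting to see what the statistic under discussion actually computes: Cohen's kappa compares observed agreement with the agreement expected by chance. A minimal Python sketch with invented example data (this is not a feature of N6 or NVivo):

    # Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o,
    # corrected for the agreement p_e expected by chance. Data invented.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        n = len(coder_a)
        p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Two coders each assign one of two codes to ten passages (invented).
    a = ["A", "A", "B", "A", "B", "A", "A", "B", "A", "A"]
    b = ["A", "B", "B", "A", "B", "A", "A", "A", "A", "A"]
    print(round(cohens_kappa(a, b), 3))  # 0.474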

     

    What I have found useful in the quantitative functionalities of the programs is to be able to get a quick look at the patterning of certain topics, not as a way of generating 'results', but as one of a range of ways to identify relevant questions that can be helpful for the qualitative analysis.

     

    It would be interesting to get others' views on this issue...

     

    From: Tony Gallagher

    Posted: Thu 29/08/2002

     

    I would agree with Jens, to a large extent. When I run focus groups I try to make clear to the participants that there is no expectation of consensus on any of the issues and that dissent from a predominant view is entirely acceptable. The numbers involved in focus groups tend to be small, overall, which limits the value of collecting quantitative data for statistical analysis. That said, a lot of small-scale experimental and survey studies using small numbers often pay scant regard to sampling requirements.

     

    When the participants in a focus group show a high degree of variation on an issue that suggests to me that more focus groups are needed, in order to examine and clarify the way in which the issue is understood. On the other hand, if they display a high degree of genuine consensus then this implies that the number of groups need not be that high.

     

    In the end, however, I'm not sure that the questionable claim to precision that some statistical testing would provide is anything like as interesting as the insights to be gained from an analysis of the transcripts.

     

    That's why I'd rather spend the time talking and listening to a focus group than getting them to fill in some questionnaires for me to run through SPSS!

  10. From: Samaa Attia

    Posted: Tue 6/08/2002

     

    I used N4 to analyse transcribed interviews. Now I want to use the same nodes that I created to analyse secondary data, i.e. articles and other documents.

     

    The problem is, when I try to open the other files in the same project as the transcripts, I get a message saying the file is too big (it contains hyperlinks and was imported mainly from the internet), and therefore I am not able to work on those secondary data.

     

    I do not want to create another project with nodes, as this will take ages; at the same time, I might face the same problem (it is a big file). I tried closing all other programs and files to free up memory for those files, but still no luck. Note that it is not one file; they are several separate ones.

     

    What can I do?

     

    From: Leonie Daws

    Posted: Tue 6/08/2002

     

    N4 can only work with plain text files. It cannot import things like hyperlinks. This may be your problem. N4 has no trouble importing large documents but it will only show you part of the text when you try to browse a large document - there is a facility to enable you to scroll down to later text that is not immediately visible in the browser.

     

    If you want to use the existing nodes, but do not need to compare findings with the original documents, you could save the original project under a different name, open up the new project and delete the old documents.

     

    This will leave the node structure in place and you can import the new set of documents into this copy of the project. But do make sure you have a backup of your original project stored away in a safe place before you experiment.

  11. From: Mark R. Nelson

    Posted: Fri 21/06/2002

     

    Situation:

    I am working on a project with multiple coders/raters. We have a pre-specified set of concepts (nodes/codes) that each rater is using against a set of documents. We have set up duplicate documents, and each rater has gone through and coded the documents using the pre-defined concepts (nodes/codes). What we now want to determine is how consistent each of the raters was in coding each of the pre-determined codes against the data set. (We are using NVivo Merge, so I have the ability to have the nodes show up with different names to distinguish which raters used which nodes on different passages/documents.)

     

    Question:

    Is there a query or set of queries to test or compare the documents for consistency of coding that might be more effective than just visual inspection of the files (as the manual suggests)?

     

    From: Sylvain Bourdon

    Posted: Fri 21/06/2002

     

    The bad news is that, up to now, the best way to do interrater reliability checks is with the NUD*IST family of software - and mostly with N6 - and not with NVivo. The command language - and, in N6, the Command Assistant - make this a very quick and precise operation.

     

    This is not to say you are condemned to suffer in silence if you are using NVivo (which has tons of other advantages, obviously!). It only means more «manual» work, that's all. This is what you can do once your projects are merged:

     

    If your coding structures in the two projects are the same, you don't need to rename all the nodes. You only need to rename the parent node of your coding structure for Merge to think they are different:

     

    Mark

    Code A

    Code B

    Code Bx

     

    Julia

    Code A

    Code B

    Code Bx

     

    Provided that you didn't make any changes to your documents in either project, they will merge into a single document with both projects' coding attached. Once the merge is done, you can obtain the difference in coding for each node by doing two Matrix Difference searches. In one of them you enter all of Mark's nodes in the top part of the dialog box and all of Julia's mirror nodes in the bottom part (you could split this if you have a large number of nodes). This will give you a matrix where the cells on the diagonal code what has been coded by Mark but not by Julia at each node. You then redo the same operation with the top and bottom boxes reversed, which results in a diagonal whose cells code what has been coded by Julia but not by Mark at each node. You can then get counts and inspect every cell as you wish.
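
    What the two searches compute is, in effect, a pair of set differences per node. A tiny Python sketch of that logic, purely illustrative (invented data, not an NVivo function):

    # Purely illustrative: the two Matrix Difference searches expressed as
    # set operations. Coding is modelled as node -> set of passage
    # identifiers; all names and data are invented.
    mark = {"Code A": {1, 2, 5}, "Code B": {2, 3}}
    julia = {"Code A": {1, 5}, "Code B": {2, 3, 4}}

    for node in mark:
        print(node,
              "| Mark only:", mark[node] - julia[node],
              "| Julia only:", julia[node] - mark[node])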

     

    This is a tad less elegant - and less flexible - than the command file or the N6 routines, but it works. Hope it is useful to you.

     

    For a complete description of the logic behind those functions now integrated in N6, you can always see my paper (Bourdon, S. (2000) 'Inter-Coder Reliability Verification Using QSR NUD*IST', paper presented at the Strategies in Qualitative Research conference, Institute of Education, London, September 2000), available in the Online Resources section of QSR's site.

     

    From: Mark R. Nelson

    Posted: Fri 21/06/2002

     

    Thank you so much for your response. It was quite helpful and confirmed my suspicions. I did indeed set up the coding structures the same and have unchanged/identical documents. I think the piece I fell short on was doing the second Matrix Difference as an inversion of the first, although now that you say it, it makes perfect sense. I have not used the command file aspect yet, although I do know it will help with some of the things I would like to accomplish. I switched/upgraded from N4 to NVivo this year, which has been a bit of an adaptation. I had considered N6, but went with NVivo due to some of the characteristics of my data set (mostly historical project documents rather than interviews or just text).

     

    Thank you too for your paper reference. I had searched for just such a paper but found no references.

  12. From: Carl Cuneo

    Posted: Tue 27/08/2002

     

    Am I mistaken in assuming that nodelinks, doclinks, and databites are supposed to be active in NVivo 2? Are you not supposed to be able to launch external software and files (audio, video), or an external web site, by clicking on the link? The linked text is green and underlined, but nothing happens when I drag my mouse over it; it does not become active. Am I creating the links incorrectly? Any help would be appreciated.

     

    From: Sylvain Bourdon

    Posted: Tue 27/08/2002

     

    No, you are not mistaken; the underlined text does show a link. But, unlike in a web browser, you need to use the local menu called up with the right mouse button to follow the link (i.e. Inspect DataBite).

  13. From: Corey Colyer

    Posted: Wed 12/06/2002

     

    Using N4 (or 5,6)...

     

    Has anyone developed a command file to automagically add indexing with an if-then logic? What I'd like to do is index any text coded at (3 3 2 1) to also be coded at (3 3 2), (3 3), and (3).

     

    When interactively coding I've been adding the indexes directly to the location where they fit in my budding tree. But, since children are of the class of their parent... I want access to that text when doing a looser index search....

     

    So to flesh out my example: if my node (3) is Roles, (3 3) is Evaluator, and (3 3 2) is Financial, text coded at (3 3 2) implies that an actor, in her evaluator role, made some consideration on financial criteria. I coded this interactively while browsing a document. I didn't want to take time to also code (3 3) and (3). [I have some trees that are getting quite dense.] Later on I may want to do an index search which takes all Evaluator types into consideration. Can I easily roll those children into the parent node?

     

    I'm sure there's a way to do this, but I haven't figured it out yet. Any ideas?

     

    From: Sylvain Bourdon

    Posted: Wed 12/06/2002

     

    What you need is the Collect search operator. It does exactly what you want, which is collecting all coding of the child nodes of a certain node into a new node. For example, Collect on node (3) will gather all the text coded at (3 3 2 1), (3 3 2), (3 3), etc. into a new node. Once this node is created, you can then cut it and merge its content with the parent node (here, node (3)).

     

    You can use the Collect operator interactively or combine many Collects in a command file (N6 Command assistant makes this quite easy).
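
    In case it helps to see the semantics: Collect amounts to a union over a node and all its descendants. A small Python sketch, purely illustrative (the node addresses and text units are invented):

    # Illustrative sketch of what Collect gathers: the union of text units
    # coded at a node and at every node below it. Addresses are tuples;
    # the data are invented.
    tree = {
        (3,): set(),
        (3, 3): {"unit 10"},
        (3, 3, 2): {"unit 11", "unit 12"},
        (3, 3, 2, 1): {"unit 13"},
    }

    def collect(tree, root):
        return set().union(*(units for address, units in tree.items()
                             if address[:len(root)] == root))

    print(collect(tree, (3,)))  # all four text units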

  14. From: Lesley Doyle

    Posted: Wed 6/02/2002

     

    Whilst I appreciate the points being made in the qual/quant debate, I agree with Pat. For the reader, it is helpful to know, when the sample is large, whether it was just one person responding in a particular way or, say, 50%, even though in qualitative analysis one is trying, among other things, to present the issue as seen by the respondents and cannot then use numbers to 'prove' anything generally applicable outside the project. Using numbers to give a general picture, plus Sarah's point about putting a matrix in an appendix for the detail, seems to me to cover the problem nicely. Hence my particular conundrum.

     

    I am using the matrix intersection with success, in the way Ann described, but have a problem with using it for one of the things I want to do as follows:

     

    I want to say how many respondents from my sample responded in a general way to a particular issue, e.g. from a sample of 132, X expressed some dissatisfaction. I did not, in my original coding, have them all in the same node, i.e. dissatisfaction, to code on into the specific nodes for each type of dissatisfaction, because my decision to use dissatisfaction as a focus emerged from my original coding (grounded theory, I believe!). In order to produce what I want from the matrix intersection, what I have been doing is coding all my specific responses into the general node (obviously as well as retaining the specific nodes) so I can do a matrix intersection which gives me a clear view of which respondents responded overall in the general way, i.e. expressed dissatisfaction. This is time-consuming and creates a very large node. If I do the intersection using all the specific nodes I get a lot of repetition of documents and therefore meaningless numbers in terms of the general, because respondents often express more than one type of dissatisfaction and therefore appear in several nodes. I should mention that I have a large number of respondents with more than one document, and I have used the attributes to enable me to identify these. I can do a matrix intersection which shows me where I have respondents who have responded twice. This allows me to eliminate the duplicate responses within a node.

     

    Is there a quicker way to identify the number of those respondents who expressed dissatisfaction than putting all my specific responses into one node? Is there another intersection I can use and if so, how?

     

    From: Pat Bazeley

    Posted: Wed 6/02/2002

     

    I'm not sure why it is so time-consuming to gather all the specific types of dissatisfaction into a general dissatisfaction node.

     

    The fastest way to do this (if they are arranged as children of the dissatisfaction node, which is what one would expect) would be to Collect on the parent node (this is assuming NUD*IST) - you may then want to keep the resulting node as a separate node, or merge it back in with the parent. If they are more scattered (or you are using NVivo), then use a Union to gather them together (in NVivo you simply make a union of the children or the subtree).

     

    One of the ways of dealing with multiple docs for each respondent and still getting numbers is to create case nodes (in NUD*IST, using the same methods as for making base data nodes is fastest, i.e. make a table with a case column in it; in NVivo this is a little more complex). You can then create a matrix of cases by dissatisfaction, restricted if you want to particular groups, to see if the pattern differs for, say, males and females.

     

    The issue of multiple response, which is what has given rise to your problem, is an interesting one. If you are wanting to see what patterns of association there are between certain characteristics or experiences and particular kinds of dissatisfaction, in qualitative terms it doesn't really matter that some people are counted twice - what you're looking at is the pattern and reviewing the associated text to see, perhaps, what is the cause of the dissatisfaction and how it is expressed. If however you want to then analyse those patterns statistically you face the same dilemmas as when using multiple response (say, in SPSS): it is not legitimate to run, say, a chi-squared analysis on the whole table (the cells are meant to be mutually exclusive). You could run it on each specific type of dissatisfaction if you have enough cases - but you probably wouldn't want to bother. Again, if you have enough cases, you can think about correspondence analysis--but that also probably goes beyond what you need for the purposes of your research. So - worth doing to review qualitatively but not statistically unless there is a particular reason for doing so.

  15. From: Brigitte Gemme

    Posted: Sun 17/03/2002

     

    What kind of qualitative research project has 20,000 documents? Does this mean you have 20,000 respondents? This is contrary to most of what I know (and teach) about appropriate use of the method. - Lioness Ayres

     

    I believe this listserv is about technology - which implies methodological issues, of course - not about "proper qualitative research". I know that the designers of NUD*IST did not have (very) large projects in mind when they started. I also know they may not agree, as qualitative researchers themselves, with the uses some people make of their software, which has perhaps become "too powerful", allowing the treatment (alas, very slowly) of hundreds of documents. Although this is not the case here, one could theoretically have a 20,000-respondent project using qualitative methods.

     

    There are qualitative researchers who feel that they belong to the realm of qualitative research; there are also researchers from other realms who use qualitative methods to reach disciplinary objectives different from those of hard-core qualitative research. Sociologists, psychologists, management scientists, and other fellows from diverse disciplinary backgrounds have found a great source of inspiration in qualitative methods, and some have incorporated aspects of the qualitative research process into their own. (Some of them also feel that they now belong more to qualitative research than to their disciplinary field, but that's a different question.) If they all claimed to be doing grounded theory, I guess there would be a problem with wishing to import thousands of documents into N4 or N5 or N6.

    However, software designed for computer-assisted analysis of qualitative data has found a (vast, I believe) public among researchers who don't necessarily follow the precepts of qualitative research as an emerging disciplinary corpus - because they are members of other disciplinary groups and believe they should stay so, for a zillion reasons - but who still find their data analysis needs met by products such as NUD*IST.

     

    NUD*IST really filled a gap in the toolbox of researchers of many disciplines and is very much appreciated for this reason outside of "pure" qualitative research.

     

    That's why I don't believe that "appropriate qualitative methods" is really an issue here. In my humble opinion, we must be aware that, like it or not, many other researchers are inspired by aspects of qualitative researchers' work and could in return inspire them. This diversity of research options is a cherished aspect of academic freedom: I really dislike the economists' disciplinary frame of research, but hey, I still learn a few things from economists once in a while, including in the methods field. Also, we (sociologists and economists) both use SPSS when in need of a quantitative data analysis package, and we can share this technological tool and information about it (on listservs, for example) to produce really different disciplinary knowledge. Qualitative researchers can certainly share their methods and tools with researchers who don't subscribe to all of the same methodological precepts; they'll certainly learn as much from doing so as I learn as a science studies researcher studying educational issues in a sociology department.

     

    Notwithstanding all this, it would also be possible for a large team of qualitative researchers to put their work together in one NUD*IST database to find new paths of analysis. This could lead to hundreds of documents in a single database, and to very interesting findings. They would still be doing work that's "appropriate" according to what some of us know (and teach) about qualitative research, but also still in need of a technological solution to handle large projects that, I believe, NUD*IST could offer.

     

    From: Lioness Ayres

    Posted: Sun 17/03/2002

     

    Your points are well taken. I did not mean to suggest that I think using 12,000 (or even 20,000) documents is wrong on its face. On the other hand, qualitative research is based on some assumptions, and one of those assumptions is sampling for depth rather than breadth. When that assumption is violated, I'd prefer to see some excellent rationale for the decision to use qualitative rather than quantitative analytic techniques. Because I am particularly interested in mixed methods research, I would very much like to see the rationale for the study we have been discussing, not to mention the explicit strategies for analysis beyond just the use of software.

     

    But more to the point, tools designed for depth analysis (among which I include NUD*IST, although of course the truly in-depth tool in this family is NVivo) may not perform as well when their assumptions are violated.

     

    I do not wish to suggest that all qualitative research be grounded theory, phenomenology, or some other pure form of method. I do believe, however, that any research enterprise must show evidence of internal analytic consistency - that is, that mixed methods studies must be consistent within approaches, so that statistical analyses are conducted according to relevant underlying assumptions (for example with regards to normality) in the same way that qualitative data are analyzed in accordance with underlying assumptions (which might be symbolic interactionist, constructivist, feminist, or even the much-maligned "content analysis"). Practices such as sampling and data collection should be explicitly linked to analytic strategies, just as they would be in a mono-method study.

     

    Qualitative analysis does have appropriate and inappropriate uses in the same way that the general linear model or chi-square have appropriate and inappropriate uses. The blurring of those boundaries - as opposed to the systematic integration of findings using multiple methods - will undermine the rigor of any research enterprise.

     

    Finally, I disagree completely that "technology" should be assessed separately from its (appropriate or inappropriate) use. I come from nursing and would offer the example of reproductive technology to anyone who disagrees with this assertion. Other examples, from economics to chemistry, abound.

     

    Hope this has clarified my position.

     

    From: gary

    Posted: Sun 17/03/2002

     

    I was a little surprised to read "so that statistical analyses are conducted according to relevant underlying assumptions (for example with regards to normality)", since one of the beauties of nonparametric statistics, of which I am a fan, is the legitimate starting point that these assumptions are not present.

     

    It also seems to me that N=large does not contradict the qualitative impetus for rich depth.

     

    From: Lioness Ayres

    Posted: Sun 17/03/2002

     

    Should have said, I suppose, that SOME statistical analyses have assumptions about normality. I figured the example of chi-square would have made it clear that not all statistical analyses use the same assumptions.

     

    I once did a qualitative study that had an N of 65 families in which each family member was interviewed individually twice. We generated 10,000 pages of data. I am comfortable saying that was too much data and limited our ability to get everything out of those data that we could have.

     

    What I am still asking is HOW is this done? HOW do researchers manage to do good qualitative research with such large sample sizes?

  16. From: Lucy Cutler

    Posted: Wed 13/02/2002

     

    I have imported an SPSS database into N4 to use as base data. The data consisted of responses by each of my respondents to a 90-item questionnaire.

    The table seemed to import successfully, but the labels were not carried across from SPSS. I have spent a "considerable" time relabelling in N4 only to now discover that 4 documents were missed due to a typo in the document name. My own fault for not checking the report carefully (n=90).

     

    My dilemma is how to get these 4 documents into N4 and coded as the same base data. When I have tried to reimport the database, it adds it on to the previous import as additional nodes. Am I truly faced with deleting the first table, reimporting, and, worst of all, relabelling and redefining the 90 questions (each with 4 response types)?

     

    From: Pat Bazeley

    Posted: Wed 13/02/2002

     

    If you are concerned about the extra data "corrupting" the base data that is already there because you have modified it, make a table with only the extra data in it and import that. In general, you can import a table as many times as you like (with or without extra data) and it will not affect the coding already done.

     

    This is a bit late for you now, but what you can do to get labels across from SPSS is to go via Excel (or similar). Save your SPSS file as an Excel file (with variable names). Open in Excel. Return to SPSS and, with value labels showing, copy the database. Go to Excel, click in cell A2, and Paste. The labels will paste over the numbers. Then save as text and import.

     

    Incidentally, I wonder if you can use 90 base data codes effectively in N4 - perhaps you might have selected the most relevant variables for inclusion and omitted the others?

  17. From: Kath McPherson

    Posted: Thu 14/11/2002

     

    I am usually a quiet participant on this newsgroup, but I think the discussion around numbers and percentages is interesting, although in some ways it misses what I have found valuable in using a qualitative paradigm.

     

    What particularly interests me about the approach (and what rigorous approaches to sampling have seemed vital to) is the increased clarity of meanings it yields, rather than their prevalence. Indeed, it is one reason that looking for alternative perspectives within views expressed can be so interesting and challenging to thinking.

     

    What gives robustness and credibility to qualitative research is, I think, unlikely to be found in augmenting it with statistical analysis, but more likely to be found in ensuring we pose new ideas and challenge old ones in a way that is coherent, clearly expressed and well justified by the data and one's analysis.

     

    From: Kath McPherson

    Posted: Thu 14/11/2002

     

    How interesting - I just sent an apology to the list in case my earlier comments had offended, and I see they had never actually left my computer due to a formatting glitch - I must just generally have a guilty conscience!

     

    On a separate point, I think the exploration of the extent of shared language over descriptive terms is fascinating. My main clinical interest is in traumatic brain injury, and in 1904 a neurosurgeon called Thomas English wrote a paper for the BMJ calling for shared language, saying 'Greater accuracy and discrimination should be observed in the description of these conditions' because Mild, Moderate and Severe injury were used so variably. His view was that progress in improving care for patients was greatly hampered by the lack of clarity.

     

    That point is perhaps more interesting than the one I initially sent.

     

    From: Lioness Ayres

    Posted: Thu 14/11/2002

     

    I think qualitative and quantitative data have a natural "fit" and that when used together, give us more, and more useful, information than either can alone. On the other hand, I don't think that means they should ALWAYS be used together, or that a qualitative study without a logical link to quantitative data (for example, studies that do NOT lead to hypothesis generation or the identification of important sources of variation) is without merit.

     

    And I'm with you about clarity. Clarity is the best.

  18. From: Achim Schlueter

    Posted: Tue 21/05/2002

     

    We are an international team and we have interviews made in German. Our first idea was to translate them entirely, so that the whole team has access to them. We realise that this is too much work. Therefore we would like to code the material in its original language and translate only the material which is coded.

     

    What is the best way to do so? Here are some ideas I had thought of.

     

    The secretary gets a report on each document which tells her which lines she has to translate. The translations are made starting with the line numbers of the original German transcript. The translation is then introduced into NUD*IST and quick-coded.

     

    The secretary gets a report on each node from time to time and has to translate it. The entire translation is then coded under the node. This would require less work in coding, but obviously the translator would lose much of the context, which might be necessary for translation. Going back to the original material is more work.

     

    The translation is made directly in NUD*IST, in the original transcript: the first coded line is edited and the English translation is typed into the NUD*IST text editor.

     

    Does anybody have any experience with this problem?

     

    From: Catherine Pilley

    Posted: Tue 21/05/2002

     

    It seems to me it would be less work for everyone if the translator translated the node. If he or she had access to NUD*IST, then they could jump to context if they were in any doubt about the piece they were translating. By doing this there would be no need to try to locate the original manually, which, as you say, is time-consuming. Editing in NUD*IST is, in my experience, very time-consuming too.

     

    Have you thought about using the translation process as a chance to add cultural context for the English-speaking colleagues? I have used this team approach successfully in translating Thai, rather than simply seeing it as an administrative task.

     

    From: Pat Bazeley

    Posted: Tue 21/05/2002

     

    I haven't any direct experience of a translating problem, but can see all kinds of problems with the idea of translating nodes, quite apart from the one of loss of context. If your coding system is working properly, any text that is interesting enough to code is very likely to be coded at several nodes, so the work would be duplicated (or complicated editing from one node to another taking place). Secondly, this would not be a lot of help to those using English because they would not be able to use any of the index searches with the data -- nodes on their own are often not very informative because they are designed to be viewed in relation to other nodes.

     

    It would seem to me that the issue of how much to translate depends on

    1) how much material is quite irrelevant, and

    2) the role in the project of those who don't speak/read German.

    Re (1): If the irrelevant material can be eliminated from a copy of the documents (as prepared for NUD*IST), then translate all that remains (i.e. I wouldn't mess around with line numbering). It's probably faster and more effective than the other methods you suggest.

     

    Re (2): Perhaps find other ways to involve them in the project (reviewing initial reports/summaries from the data?). Alternatively, translate just a subset of documents which provide a range of perspectives, so they can get the "flavour" of what is being studied. It depends a bit on what proportion of the team they are and to what extent they are involved.

  19. From: Hyun-bang Shin

    Posted: Fri 22/03/2002

     

    I am going to start my fieldwork in China soon. What I hear is that NVivo seems to be working with languages other than English, but it is not clear how well it works with Chinese characters. I am attaching QSR's previous response to my query for your information. Could anyone who has experience in analysing Chinese data tell us more about his/her experience?

     

    NVivo was not designed with non-Latin fonts in mind and so we do not strictly support it for this kind of project. However, because of its rich text capability a number of users have reported varying degrees of success in using it with Korean and Chinese fonts (and please understand that most of my discussions have related to Chinese fonts).

     

    Our impression was that NVivo would not handle these fonts as each Chinese character is usually rendered by two text characters on the computer - however, since we have had reports from users that they've been able to import and view Chinese in NVivo's browser, it is possible that there is a single byte equivalent (that is, one Chinese character is represented by a single text character).

     

    From what I understand, Chinese or Korean Windows is likely to be better than English Windows (whether you need both Chinese and Korean or just one of them I do not know).

     

    Also "type C" or "pencil" font for Chinese text is apparently the one to choose.

     

    Getting the text to display in the browser is only the first step - working in NVivo with non-Latin fonts is always a compromise as you'll need workarounds for other parts of the interface (e.g. text search) and some things won't work at all (e.g. printing out browsers with coding stripes displayed). You really need to ascertain whether the areas that you can use in NVivo with the data you wish to analyse are sufficient for your project. Here the demo software, which is fully functional except that it won't save a project, is useful.

     

    From: Yu Mui

    Posted: Fri 22/03/2002

     

    I was told that NUD*IST worked well with Chinese Windows, but I have never tried it since I don't have Chinese Windows. I tried Atlas with Chinese text, but it would alter some Chinese characters and show funny symbols. I would love to hear other people's experiences of using qualitative software for Chinese text.

     

    From: Ted Barrington

    Posted: Fri 22/03/2002

     

    Just thought I'd highlight a distinction between the use of Chinese fonts (or indeed other non-Latin fonts) in NUD*IST and NVivo.

     

    The discussion of the past few days encompasses all I know of the situation as far as NVivo is concerned.

     

    NUD*IST differs in that it uses a system font for its display. It may well be that the right language version of Windows supplies a suitable font. I have heard that later versions of Chinese Windows use an English font as this particular system font, and that Chinese text therefore does not display in NUD*IST.

     

    For NUD*IST it is easy to determine whether your particular Windows version will display the font you require - download the N6 demo software from our website, install it, then import a Chinese (or other-language) text file and see whether it displays correctly in N6's browser.

  20. From: Caroline Webber

    Posted: Thu 25/07/2002

     

    How can I alphabetise the nodes in my tree node list? Whenever I try to rearrange by dragging and dropping, the node being moved either becomes a "child" of another node or else I lose it altogether. I know that each node has its own code number. Is there a way to use the numbers to rearrange them? NVivo automatically alphabetises the case nodes and free nodes.

     

    From: Leonie Daws

    Posted: Thu 25/07/2002

     

    In NVivo 2 there are a couple of ways of doing this. For a temporary look at your nodes in alphabetical order, click in the left panel of the Node Explorer until the nodes you are interested in appear in the right panel.

     

    Then click on the Title column in the right panel and the nodes will automatically be arranged alphabetically. Clicking a second time on Title will reverse the order (Z-A). Clicking on the No. column will rearrange the nodes in number order.

     

    If you want them to appear alphabetically in the left panel you will need to renumber each node so that the number order reproduces the alphabetical order. You can do this by selecting the node, choosing Properties (with a right mouse click or by clicking on the Properties button) and changing its address number. You may have to do some fiddling to get the complete list into alphabetical order.
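
    In effect, the renumbering is a sort: list the sibling nodes with their titles, sort by title, and hand out new sequential address numbers. A toy sketch of that logic in Python (illustrative only - NVivo has no scripting interface for this, so the new numbers still have to be typed into each node's Properties dialog):

        # Hypothetical sibling nodes under one parent: (address number, title).
        nodes = [(3, "Coping"), (1, "Family"), (2, "Anxiety")]

        # Sort by title (ignoring case) and assign new sequential addresses.
        by_title = sorted(nodes, key=lambda n: n[1].lower())
        renumbered = [(i + 1, title) for i, (_, title) in enumerate(by_title)]

        for number, title in renumbered:
            print(number, title)   # 1 Anxiety / 2 Coping / 3 Family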

  21. From: Liz Denton

    Posted: Thu 21/02/2002

     

    I'm working in the UK and doing some action research (AR) in project management in the oil and gas industry. I am writing up and have just submitted my work to my supervisor. I have a sense that she is trying to pull my work back to a conventional research format (general literature review, use of the third person, clear starting point with specific aims and objectives).

     

    I feel that if I meet her objections with the changes she is asking for, my research will fall between the cracks. I don't think anyone in my school has any experience with AR, and I fear that my work will be seen as not making the mark. Would anyone be able to mentor me?

     

    I have two specific questions. 1. I am a participant/leader in this project. When I am looking for evidence for my claims, I realise that the evidence comes just from working on the team. In this instance, how can one justify a claim? It is almost as though, if it is in someone else's words it counts, but if it is my observation it doesn't. 2. I keep a journal, writing in it several times a week. Is it normal to submit the journal, or are the extracts in the thesis sufficient?

     

    From: Jill M. Humphries

    Posted: Thu 21/02/2002

     

    Qualitative research encompasses a variety of techniques, e.g. observation, participant observation, interview data, archival research, etc. Guba and Lincoln (1985), Denzin (1994), Creswell (1999, 1994) and Spradley offer useful justifications for when to use a qualitative framework as opposed to a quantitative one. Yin (1994, 1993, 1980) provides a good rationale for when to use a case study approach as well. You might want to add a paragraph explaining why a qualitative framework is more appropriate for the study: you are attempting to understand complex human phenomena by providing a rich description of the process... you are not seeking to generalize to a wider population...

  22. From: Fran Barg

    Posted: Tue 2/07/2002

     

    I am beginning several large projects that will include interviews conducted by multiple research assistants. I am interested in finding out whether anyone has had success using voice recognition software to generate transcripts that can then be imported into NUD*IST.

     

    From: Elliot Richmond

    Posted: Tue 2/07/2002

     

    The short answer is, probably, that what you have in mind won't work.

     

    Voice recognition software must be trained to a particular person's voice. Some versions allow for multiple users, but each user must be trained separately, and the software will only recognize one user at a time.

     

    Also, even the best recording equipment available will not produce recordings of sufficient quality to be fed directly into the software. Speaking "live" into a high-quality microphone is about the only technique that produces reasonable results. Even then, mistakes will occur frequently, and it is better to correct them as they happen.

     

    Some software packages will accept input from a high quality digital recording device, but the software still must be trained to recognize the device as a user.

     

    From: Aleksandra Belofastov

    Posted: Tue 2/07/2002

     

    I use Dragon NaturallySpeaking (version 4 - there is now a version 6) to transcribe my interviews directly onto the computer. Dragon requires that you train it to the sound of your voice, which would mean that each of your research assistants would need to spend time doing this. The dictation is saved as a wave file (this can take considerable disk space); you then set the computer to type out the transcript and save it as plain text. Unless you have a good deal of disk space, I would suggest deleting the wave files at the end of this process. I should also mention that version 4 is not perfect; I have had to 'tidy' my interviews following transcription, as Dragon doesn't always understand what I am saying!

     

    Version 6 may be better. Needless to say, having attempted both methods of transcription (i.e. typing directly from the audio cassettes, and using Dragon), I have most definitely saved considerable time with the latter.

     

    Importing into NUD*IST is relatively easy from this point. If you are using an older version, you need to make sure that the plain text interview looks the way you want it (i.e. paragraph returns in the right places and so on); the newer versions (I am using version 5) allow you to edit the interviews even after importing them.
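
    If it helps, that kind of pre-import tidying is easy to script. A minimal sketch in Python (the filenames are hypothetical, and the rules - one line per paragraph, one blank line between paragraphs - are just one choice of how you might want your text units to fall):

        # Tidy a Dragon plain-text transcript before importing it into NUD*IST:
        # collapse each paragraph onto one line and keep a single blank line
        # between paragraphs, so the text units fall where you want them.
        with open("interview.txt") as f:
            text = f.read().replace("\r\n", "\n")   # normalise line endings

        paragraphs = [" ".join(block.split())
                      for block in text.split("\n\n") if block.strip()]

        with open("interview_clean.txt", "w") as f:
            f.write("\n\n".join(paragraphs) + "\n")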

     

    From: Shigeko Izumi

    Posted: Tue 2/07/2002

     

    I think the problem with voice recognition programs is that they have to be trained. I understand that your research assistants could spend time on training, but what about the voices of the interviewees?

     

    How did you train the program on the interviewees' voices, which I guess are tape recorded?

  23. From: Christine Maheu

    Posted: Fri 22/11/2002

     

    A few months ago I started coding an interview, and I only coded half of it. It has now been months since I've had a chance to come back to it. The question is: once I have started coding an interview, is there any way I can go back and view all my coding for a specific area in the document? And I don't mean by node browsing. Since it has been months since I looked at my coding, I would like to go back and see my line of thought and what led me to code those areas at those nodes. I would also like to see which parts of my text have and have not been coded.

     

    From: Pat Bazeley

    Posted: Fri 22/11/2002

     

    Highlight the area of interest in the document, and then press x on your keyboard (or choose eXamine coding from your right mouse menu).

     

    You will be provided with a list of any nodes that appear within the selected area. If you want to review the document as a whole, select the document in the Explorer and choose Report - either a summary of coding or text with cross-references should give you an overview.

     

    To know where you're up to next time you come back to a long document, leave yourself a note in the document's description slot saying which text unit you are up to. Then you can use l for Locate to go straight to that text unit when you next want to work in the document.

  24. From: Lisa Cunningham

    Posted: Wed 20/03/2002

     

    I have some footage stored on a VHS tape and would like to use a small section in my NVivo report as a DataBite. My question is: how do I go about this?

     

    From: Elliot Richmond

    Posted: Wed 20/03/2002

     

    You must convert the television signal recorded on the tape into a digital file that can be saved on your computer. The process is known as video capture. Some Macintosh computers (iMacs?) come with the software and hardware to do this. If you don't have such a computer, you can either purchase the hardware and software or have the conversion done for you. Since this is probably a one-time task, you are better off having it done.

     

    Look in the yellow pages for video services, videotape editing services, etc.
