


Community Reputation

2 Neutral

About alexquse

  • Rank
    Advanced Member

Recent Profile Visitors

6,139 profile views
  1. alexquse

    What is the "new nvivo"

    I keep receiving advertisements for upgrading to the "new NVivo". I bought NVivo 12 less than 6 months ago, and the upgrade price asked is USD 625... wow! It is very hard to find out what this "new NVivo" actually is (NVivo 13?) and what the improvements are. Any information about this version?
  2. I am looking at the filters of the Crosstab query. There are a lot of options, but I can't find any option to filter on "in the same Set" or "in the same Document".
  3. I have a rather difficult issue to explain. Take a survey with two open-ended questions, both coded with the same coding system: Code(A) and Code(B). I would like to analyse the differences in coding between the two questions for the same person: what the person expresses in question B according to what he/she expressed in question A. In other words, I am trying to build a coding matrix where only the coding concerning the same person appears. This is easy with 1, 2, ... 5 cases, but with more than 100 cases it becomes tricky. Formally, if ID is the person's identifier, I want to create a matrix of Code(A) x Code(B) for any ID(A) = ID(B).

     I thought of creating relationships manually between all cases to link ID(A) and ID(B), but it takes a long time (is there no way to create relationships automatically from a table?), and in the end I'm not sure what to do with all these relationships. I also thought of creating as many cases as question-ID pairs; then, in the classification table, I indicate the ID, which allows the pairs to be retrieved. But after doing that, I am stuck.

     The workaround I found is to build this table of transitions in a statistical package to detect the most frequent movements from A to B, and then manually create relationships in NVivo for these most frequent movements. But by doing this, I lose the interactive capabilities of NVivo: I get a set of relations between codes ("when there is this in A, there is that in B"), but I still don't know how to display only the content where ID(A) = ID(B). Any ideas or suggestions? (I'm not sure I am clear enough; both my English and my ideas are fuzzy.)
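The transition table described above can be sketched outside NVivo. A minimal pandas sketch, assuming coding has been exported as one row per (person ID, question, code); the column names and example codes are hypothetical, not an actual NVivo export format:

```python
import pandas as pd

# Hypothetical export of the coding: one row per (person ID, question, code).
coding = pd.DataFrame({
    "id":       [1, 1, 2, 2, 3, 3],
    "question": ["A", "B", "A", "B", "A", "B"],
    "code":     ["hope", "fear", "hope", "hope", "fear", "fear"],
})

# Split by question, then join on the person ID, so each row pairs a
# person's code in question A with the same person's code in question B.
a = coding[coding["question"] == "A"][["id", "code"]].rename(columns={"code": "code_A"})
b = coding[coding["question"] == "B"][["id", "code"]].rename(columns={"code": "code_B"})
pairs = a.merge(b, on="id")

# Matrix of Code(A) x Code(B); ID(A) = ID(B) holds by construction of the join.
matrix = pd.crosstab(pairs["code_A"], pairs["code_B"])
print(matrix)
```

The join on `id` is what enforces the ID(A) = ID(B) restriction, so the cross-tabulation only ever counts code pairs belonging to the same person.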
  4. Yes, I will try this. I have not begun my big project yet; I am still doing some tests. I have one more question about autocoding: could it be a way to change the structure of a project? I know that autocoding by existing pattern is not reliable, but if we have to restructure our project differently, one idea is to re-import a new source and autocode it according to the existing pattern. Has anyone tested this? (I'm going to try: I am working on an extract of 500 entries with two open-ended questions.)
  5. alexquse

    Italian as content

    up! 😀
  6. Thank you. The file is structured with individuals as Sets (and Cases, which is straightforward), so it is possible to do anything I want with matrix coding. Another question: I tried the new function that imports data directly from an SPSS file. I couldn't find a way to get the open-ended questions onto a "document page": they are embedded in a grid, which is not a very practical layout for coding. Not a big deal. A bigger problem: the import stops at 100 cases, which makes the function useless, since a "quantitative" study with open-ended questions often has a few hundred cases. Or did I miss something?
  7. Dear all, sorry to ask a question that has been asked many times. One of the issues I have with NVivo is choosing how to structure the source files. Having one file per case has advantages and disadvantages. My experience is that NVivo has trouble working with many separate sources; merging the cases into one file gains processing speed (for NVivo, but also for the human). Yet I cannot access some functions when all cases are merged into one source file: NVivo is more powerful with each case in a separate source document. I would like your point of view on the pros and cons of aggregating cases in one source versus keeping each one in a separate source. My project consists of the open-ended questions of a survey, so I have about 3,000 individuals. But we can discuss this issue more generally: at what point is it good practice to separate the sources?
  8. alexquse

    Italian as content

    I'm interested in working with French, German and Italian texts. Do you plan to implement Italian as a text content language? (Or a way to add custom lemmatization dictionaries?)
  9. My question is simple: does NVivo 9 work with Windows 10?
  10. I noticed an annoying problem with a big project of 1,700 items: shifts appear in the coding! At first I thought it was accidental, and I corrected them. But I'm beginning to realize that it is almost systematic. I do not understand what causes them, but it's pretty scary to see my coding work deteriorate... Is there a solution?
  11. alexquse

    Bug in nvivo 9

    I can't reproduce the problem anymore. I've sent you the error log files anyway. Thank you for your reactivity. Alex
  12. alexquse

    Bug in nvivo 9

    I found a bug in NVivo 9. I had noticed it long ago, but since it occurs in special circumstances and is not a very important problem, I had forgotten it. It is impossible to copy more than one node into a node from the "Find Results" window: NVivo closes unexpectedly.
  13. I have worked with NVivo for years. My greatest disappointment is the slowness of the software (I've complained many times here). If you have a dataset of some size, each operation takes several seconds. That does not seem serious, but when you repeat a coding operation 1,000 times, it ends up taking a long time. The problem actually started with version 8. I went from version 8 to version 9 because the advertising said it would improve the speed, but in fact the improvement is maybe 20%, which solves nothing: going from 10 seconds to 8 seconds is still 8 times too long! Now we are told that version 10 is faster. But is it really? Are there people working with a "normal" corpus of qualitative research (e.g., 50 interviews, i.e. 1,000 pages)?
  14. Here is a solution (converting the classification into nodes): http://forums.qsrint...?showtopic=3436
  15. Hi, I have many source classifications. My question is: what can I do with these classifications, and how? I want to analyse the distribution of nodes by source attributes, but it seems very laborious. The only easy feature I've found is visualizing sources by attributes. Does anyone have a solution for using these classifications? I should mention that I searched for a long time, and the only option I've found is to create nodes. Does anyone know a way to convert a source classification into a node classification? Thank you. Alex. Oops! Sorry for the multiple posts: I received an error message and thought that nothing was published.