Friday, January 30, 2009

Week 4: Measurement

POST – WEEK 4
The question of qualitative vs. quantitative research is an important one. For decades, it seems, quantitative research has ruled the arena in terms of what is considered valid. In fields such as technical communication, where there is both a strong polarity between and an equally strong convergence of the sciences and the social sciences, we especially need to distinguish, separate, and integrate the two. Both are necessary for truly provocative, accurate, and productive research.

A good friend of mine at Utah State University, a PhD student in sociology, presented two years ago at a paper-and-poster symposium where research was evaluated and awards were given. His research was on the moods of people in varying environments and consisted primarily of qualitative methods: evaluating people, interviewing, describing circumstances, observing, and participating in the cultures he studied. His presentation was great, his methodologies sound. The judging process, however, was flawed: a single judge was randomly assigned to each presenter. My friend was unfortunate enough to receive a very narrow-minded judge (a chemistry professor) who did not seem to recognize that both research traditions exist. The judge was impressed with the presentation and gave my friend a full 20/20 on four of the five categories (presentation, argument, poster design, relevancy, something like that) but did not understand the fifth: soundness of research. He asked my friend, “Where are the numbers? You MUST quantify your results or they are invalid.” My friend explained that he had used mostly qualitative research and that his results weren’t generalizable and countable; they depended on the environment, and the variables weren’t tampered with. The judge only frowned. He actually had the audacity to ask, “What is qualitative research?” After a bit of a scuffle, the judge dismissed the presentation and gave my friend a 0/20 on research, taking him completely out of contention for an award. What he didn’t understand is that many kinds of research depend on understanding environments, situations, and individual reactions. All he could see were numbers, and he assumed those numbers would tell him everything. Of course, my friend could have quantified some of his findings, but that wouldn’t change the fact that his research was qualitative, that it wasn’t cause/effect, and that it couldn’t be generalized.

Each of the readings offered both overlap and new perspective on this separation of qualitative and quantitative. They also made a clear distinction between reliability and validity, terms I previously couldn’t separate. After reading the articles, I came to the following conclusions. Reliability is “a social construction, a collaborative interpretation of data” (L&A 134); it refers “to whether the experiment precisely measures a single dimension of human ability” (Goubil-Gambrell 587); and it is the “external and internal consistency of measurement” (Williams 23). Validity, on the other hand, is “the degree to which the researcher measures what he claims to measure” (Williams 23); it means the experiment “actually measures what it says it will measure” (Goubil-Gambrell 587); and it is the “ability to measure whatever it is intended to assess” (L&A 140). In other words, research is reliable if the methods within the experiment are consistent, narrow, and focused, and if those reading the research can make sense of it. Research is valid if the results are in line with what the researcher claims to have found. (The classic illustration: a scale that always reads five pounds heavy is reliable, because it is consistent, but not valid, because it doesn’t measure what it claims to.)

Perhaps I’m simplifying this too much, but after reading through this week’s articles, I still found probability to really be about percentages and likelihood. It uses a scale between 0 and 1 which, it would seem, represents the likelihood that something might or might not occur. Significance refers to chance. If what was found in the research happened as a result of coincidence, then it is not statistically significant. However, if it can be shown that this is a repeatable, common, traceable phenomenon rather than a fluke, it becomes important, a “right-on!-we-found-something-important!” or, as they say, significant.
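
To make that concrete, here is a minimal sketch (my own illustration in Python, not from this week’s readings) of what a p-value means. Suppose we observed 60 heads in 100 coin flips and want to know how often pure chance, a fair coin, would produce a result at least that extreme:

```python
# Hypothetical illustration (not from the readings): estimating a p-value
# by simulation. We observed 60 heads in 100 flips; how often does a fair
# coin do at least that well by pure chance?

import random

observed_heads = 60
n_flips = 100
n_trials = 10_000  # number of simulated experiments

at_least_as_extreme = 0
for _ in range(n_trials):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / n_trials
print(f"Estimated p = {p_value:.3f}")
# A small p (conventionally p < .05) means chance alone rarely produces
# the observed result, i.e., it is statistically significant.
```

For 60 heads out of 100, the estimate lands around .03, below the conventional .05 cutoff, so coincidence becomes an unlikely explanation and the result would be called significant.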

2 comments:

  1. Ah Curtis, you can actually make this sound interesting. I'm sorry to hear about that issue regarding your friend's empirical research methodology being called into question (didn't even know what that meant a few hours ago, so I'm growing as a person, and it is 3 AM). From my vast knowledge of qualitative research, I have to assume that the chemistry professor found his reliance upon what appears to be a more natural type of research to be flawed as a methodology. I know we are cautioned to ground our work in solid research, to choose the appropriate method for the type of results we wish to garner, and to emphasize the methodology and not just the conclusion – to stay away from just giving results. But, as you stated, his results were sound. I hope he went on to present his findings at another symposium. And, take heart! Even judges in law courts make mistakes when they do not allow certain aspects of cases to be considered, and then later on under appeal find out there is a precedent, and it costs everyone a great deal of heartache and money. So in a way, at least he left with his life.

  2. Thank you for the story about the Chemistry professor, Curtis. My experience is *very* much the same as your friend's, except that I'm having to deal with Marketing professionals or Computer Science engineers in the workplace who are responding to my usability research (which is decidedly qualitative/descriptive). Their knee-jerk reaction is to dismiss it out of hand or to make causal claims that are unsupportable. Sigh . . . 8(

    BTW, probability is really easier than you're making it. It is merely a measure of the likelihood that something is the result of chance. So p = .05 just means that 5 times out of 100, what you observed is the result of pure chance rather than the variable you wanted to measure.
