Saturday, February 28, 2009

Week 8: Ethnographies

Some of the key differences between ethnographies and case studies are the duration of the studies, the researcher's role in the study, and the triangulation of data in ethnographies. In a good ethnography, the researcher assumes a participatory role in the research and immerses him or herself in the study for several weeks, months, or even years. Case studies, by contrast, can be observed from the outside and might span only a brief period of time.

Lauer and Asher give these suggestions for ethnographies:
Things that should be a part of a good ethnography:
-minimum of overt intervention by the researcher
-based on extensive observations, researchers generate hypotheses
-researchers validate hypotheses by returning to the data
-thick descriptions: detailed accounts of writing behavior in its rich context
-researchers identify and define the whole environment they plan to study
-triangulation occurs: there is a mapping of the setting; selecting observers and developing a relationship with them; and establishing a long period of investigation
-validity comes from a continual reciprocity between developing hypotheses about the nature and patterns of the environment under study and the regrounding of these hypotheses in repeated observation, further interviews, and the search for disconfirming evidence
-researchers must be careful to report the types of roles they played in the study; their perspective becomes an important part of the environment studied
-three kinds of notes should be taken: observational, methodological, and theoretical
-there must be reliability amongst data coders and the testing of schemas in new environments
-reporting of conclusions that suggest important variables for further investigation
-findings are not generalized beyond the subjects of the study
-the study is replicable and would produce the same or similar variables for other observers, and those variables would remain stable over time

What to look out for/problems to avoid when doing an ethnography:
1. data overload
2. tendency to follow first impressions
3. forcing data to confirm hypotheses
4. lack of internal consistency or redundancy, or focusing too much on novel information
5. missing information
6. base-rate proportion: basing impressions on a small population while ignoring the total size of the population
7. generalizations

Strangely, though, based on these recommendations, I found myself thinking that of the published ethnographies we read, only one really closely followed these suggestions, and it was the one I would have thought the most un-academic and least useful had I not read Lauer and Asher's article. It seems odd to me, but the almost creative non-fiction piece by Carolyn Ellis adheres most closely to the ethnography description.

Shattered Lives: Making Sense of September 11th and Its Aftermath – Carolyn Ellis
The first half of this story is very much a creative non-fiction piece, simply immersing the reader in a narrative about personal experience. However, following the recommendations of Lauer and Asher, it appears that this was, in fact, an effective ethnography. The latter half of the article grounds her experiences in theories that have been tested and defined. She provides a thick description of the entire environment: Washington, D.C., New York, Charlotte, airports, rest homes, etc. She validates her hypotheses about what she is experiencing through the theories she quotes, and she specifically details her role in the ethnography: she is victimized in unique ways and is an airline passenger during the attack, though on another airplane. What she experiences is never generalized, though it is conceivable that what she writes could be replicated theoretically in others' experiences as they went through the same moment in similar situations. Ellis has provided us with something useful and replicable, and has thus made for an effective ethnographic study.

The Social Life of an Essay: Standardizing Forces in Writing – Margaret Sheehy
Sheehy starts off by explaining her role as researcher in a classroom of seventh-graders who will be assigned to design a new school that they will then attend. She, as Lauer and Asher recommend, explains her relationship with the two instructors. She explains the concern for standardization as described by theorists such as Foucault and mentions that she risks "perpetuating Street's concern that literacy has been constructed as an autonomous, a-cultural ability to produce and reproduce a seemingly stable set of characteristics that looks like an essay" (336). The study researches the forces of standardization in youths' essayist writing at school. A previous ethnography by Shuman determined that students were "dependent on the community in which texts were produced" (337): that they were dependent on patterns of exchange, notions of audience, and situations of use of texts. To determine the standardization of the writing in this classroom, Sheehy examined the articulations of relations across the eight-week Building Project.

Sheehy asks when doing this research project: "What are the standardizing forces at work here, and what are the ones that stratify?"

A description of the environment and participants (students) is given, naming specific demographic information, the number of students, and the allotted time for teaching each day. However, there is ambiguity as to what exactly is going on and what Sheehy is really doing. A thicker description here would be helpful, along with less description later of what students actually said. Sheehy explains her role and awkward position as researcher in the classroom. Triangulation occurred as Sheehy mapped the environment, got to know her subjects and observers, and established a long period of investigation. Sheehy codes data by interpreting what students talked about regarding the School Board's decision to rebuild the school. A great deal of the article is spent describing the student comments in two levels of analysis: 1) production, consumption, and distribution, and 2) centripetal and centrifugal forces in writing.

Ultimately, I was left thinking there was strong potential here: Sheehy gives good theoretical background to support what she is attempting to argue. However, when I think in terms of replicating the project, it is difficult to understand what she and the teachers were actually doing in the classroom. Less time spent on specific data, and a much thicker description of what was going on and of how teachers, researchers, and students interacted, is necessary. Honestly, I had a difficult time understanding what really was researched.

Writing in an Emerging Organization – Stephen Doheney-Farina
Exigency is stated: professional communication students need to learn how complex social contexts can and should affect their writing in the workforce. The author asks: how are writers' conceptions of rhetorical situations formulated over time? How do writers' perceptions of their social and organizational contexts influence the formulation of these conceptions of rhetorical situations? What are the social elements of writers' composing processes? How do writers' perceptions of their organizational contexts influence these processes? How do writing processes shape the organizational structure of an emerging organization?

Hypotheses: 1) rhetorical discourse is situated in time and place and is influenced by exigence, audience, purpose, and ethos; 2) the rhetor conceives of these situational factors through interactions with persons, events, and objects that exist external to the rhetor; 3) the researcher attempts to explore human interaction as it is evident in social and cultural settings; 4) a microscopic investigation of important parts of a culture can elicit an understanding of that culture; 5) individuals act on the basis of the meanings that they attribute to the persons, events, and objects of their worlds; 6) researchers seek diverse interpretations of the acts under study; 7) the researcher is the primary research instrument, and as such he or she must play a dual role.

Data recorded in four ways: field notes, tape-recorded meetings, open-ended interviews, and discourse-based interviews. Status of startup company given, explaining crisis of not generating revenue and the rhetorical decisions that had to be made in the writing of a business plan to be given to investors. The writing of the Plan required agreement upon writing strategies, which caused hierarchies to change and cooperation to happen, thus changing the company and the writing process. Details are given to describe company’s woes and triumphs, primarily related to power struggles.

Theories are described to analyze the findings. Implications for teaching are given.

Overall, the story that is given is interesting and has potential to teach and instruct. However, this "ethnography" doesn't closely follow Lauer and Asher's suggestions. First off, little is known about the researcher and his direct involvement during the research process. All that is known is that for several weeks he met with organization members to observe and interview. It is difficult to gauge his rapport with the company members. What he does do well is validate his hypotheses by returning to his observations. The problem, though, is that the data doesn't seem to be coded and there is no inter-rater reliability. Also, little is known about the kinds of notes he took: are they observational, methodological, or theoretical? Mostly, they seem methodological. Variables are certainly discovered that could be used for further research (rhetorical writing strategies in business plans, for one), but Doheney-Farina problematizes his findings by suggesting that what he found should apply to the classroom, thus indirectly generalizing what was found. Finally, it is hard to know if this study is replicable because little was explained about the methodology.

Learning the Trade: A Social Apprenticeship Model for Gaining Writing Expertise – Anne Beaufort
Research questions are given: what defines the social status of texts and, by association, the social status of writers within discourse communities? If writing is a social act, what social roles aid writers in learning new forms of writing? Is there a developmental continuum for writers who are in the process of being socialized into new discourse communities?

Much theory is reported about the social experiences that affect student behavior and learning processes. A description of the site of the ethnographic study is given: a nonprofit organization in the heart of an urban area. Two newcomers to the organization are named as the subjects who will be interviewed over a year-long process. Data was collected through a series of interviews with several workers, mostly with Pam and Ursula, who claim to spend 50% of their time at work writing. Interviews were audiotaped and all writing was photocopied, including drafts and revisions. Field notes, interview transcripts, and writing samples were examined for patterns and themes in relation to social roles for texts. Triangulation is described: 1) data sources were compared with each other, 2) different responses of the informants were compared over time, and 3) informants' responses to drafts of the research report were solicited.

Findings are given, but at first they don't seem very ground-breaking. It is found that more difficult, rhetorically driven writing (like large government grant proposals) was given to more experienced writers who ranked higher in the organization's hierarchy, whereas simpler tasks were given to less experienced employees. A detailed description of the writing tasks that were performed, primarily by Ursula and Pam, is given. It is shown that there are five areas of context-specific knowledge that the expert or older writer had acquired: discourse community knowledge, subject matter knowledge, genre knowledge, rhetorical knowledge, and task-specific procedural knowledge. Six of the writing roles observed (observer, reader, clerical assistant, proofreader, grammarian, and document designer) did not require context-specific knowledge.

Overall, theoretical approaches are taken to observe and analyze the data. However, the information seems fairly useless and obvious. The main findings indicate that writing responsibilities increase in importance to the organization as local knowledge increases, as the employee moves from novice to expert writer in the company. Doesn't that seem obvious? Most problematic, however, is Beaufort's attempt to generalize her findings, even saying that "it is likely that most of the elements needed to duplicate the kind of writing apprenticeship that occurred at JRC are readily available in other workplace settings."

Thursday, February 19, 2009

Week 7: Survey and Sampling

Lauer and Asher claim that surveys are useful when a researcher needs to determine something that can be assumed (with high confidence) to be representative of a large group. Whereas case studies examine in detail an isolated group in order to name and describe variables for that group, surveys that are "statistically designed…can be considered representative of the entire population" (54). The greatest benefit of surveys, if done correctly, is that a researcher can gain valuable, generalizable information that can be applied to a large population without the time, effort, and cost of meeting with and interviewing everyone in that population. The question, then, that I would raise from this information that Lauer and Asher present is this: is the difference that separates survey/sampling research from case studies the fact that surveys have the potential to be generalized if the confidence limits are strict enough, whereas case studies are never generalizable?

The selection of subjects for surveys is both important and difficult. Subjects are chosen by random sampling, but from a group large enough to represent the population in order to have strict confidence limits. I can imagine this is very difficult. One of the greatest hindrances to selecting an effectively representative subject pool is the fact that researchers must rely heavily on the subjects' willingness to take the time to fill out a survey. Without incentive, which often means money or "extra credit," few people feel motivated to fill out a survey. Even with incentive it can at times be challenging. This is obvious when we look at Rainey et al.'s "Core Competencies" article, where they invited 587 people to participate and only 47 initially responded. This poor showing makes the information less generalizable, thus requiring the authors of the article to apologize for their data by stating that "the data can be assumed to be suggestive for the profession, if not representative because of the small, non-probabilistic sample." Of course, this doesn't discredit what they learned from their study, and valuable information was still taken. However, this then turns more into a case study of willing managers rather than a sampling that can be representative, which is the purpose of survey and sampling, is it not?
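The arithmetic behind that poor showing is easy to sketch. A minimal illustration in Python (the 587/47 figures come from the Rainey et al. summary above; the margin-of-error formula is the standard one for a simple random sample, which a self-selected respondent pool like this is not, so the numbers are only illustrative):

```python
import math

invited = 587      # invitations sent (Rainey et al., as summarized above)
responded = 47     # initial responses

response_rate = responded / invited
print(f"Response rate: {response_rate:.1%}")   # roughly 8%

# Rough 95% margin of error for a proportion estimated from n respondents,
# using the worst-case p = 0.5 and (unrealistically) assuming random sampling.
p = 0.5
n = responded
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: about ±{moe:.1%}")
```

A margin of error this wide is one concrete way to see why the authors could claim their data was only "suggestive for the profession" rather than representative.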

Three types of data can typically be counted: nominal, interval, and rank order. These data can be viewed easily and effectively using the K data matrix, which aligns subjects with the variables. Nominal data simply count things like, as L&A show, the number of comma faults in a composition. Interval data show things like scores that have numerical intervals between them, test grades and the like. Rank-order data assign ranks from 1 to n.
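The subjects-by-variables matrix L&A describe can be pictured concretely. A minimal sketch (the subject labels and values are invented for illustration; only the three data types come from the text above):

```python
# Each row is a subject; each column is a variable, one per data type.
subjects = ["S1", "S2", "S3"]
data = {
    "comma_faults": [4, 0, 2],     # nominal: a count of comma faults
    "essay_score":  [78, 91, 85],  # interval: scores with numerical intervals
    "rank":         [3, 1, 2],     # rank order: ranks from 1 to n
}

# Print the matrix, aligning each subject against the variables.
print("\t".join(["subject"] + list(data)))
for i, s in enumerate(subjects):
    print("\t".join([s] + [str(data[v][i]) for v in data]))
```

Laying the data out this way makes it easy to read across a row for one subject's variables, or down a column for one variable across all subjects.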

Friday, February 13, 2009

Week 6: Case Studies

In composition and communication studies we argue and fight over the subjectivity of things like assessment, "good" versus "poor" writing, and the degree of effectiveness of rhetoric and persuasion (among, obviously, many other things that we argue about). Subjectivity creates a challenging quagmire within fairly abstract fields such as English, writing, public speaking, and other forms of human communication. Unlike scientific inquiry, where generalizable answers often reign in importance, humanities-based inquiry must develop understanding through much less generalized information. Hence our need for case studies and qualitative research. While much of the information gathered from case studies is rarely generalizable, and much too difficult and expensive to acquire in mass quantities, it can nonetheless be very valuable: insightful and influential. Thus, when we approach questions about pedagogy, assessment, and effectiveness of writing, qualitative case studies are not only appropriate but very necessary. They do, after all, spark a series of questions that continue investigation into difficult and complex questions.

The selection of subjects for such studies should invariably correlate with the research questions at hand. Considering Flower and Hayes’ research on pregnant pauses, it would have made little sense, for example, to only choose novice writers to determine where and how pauses affect the overall quality of writing. Likewise, in Flower, Hayes, and Swarts’ research design, choosing subjects with little or no advanced education or workplace experience would have likely produced much less useful results. One thing that is clear, though, is that when we write up a research report, we must clearly define who our subjects are. Without this descriptive information, it is difficult for the reader to lend credibility and usefulness to the results.

The most complicated, and important, component of qualitative research is defining an effective and reliable method of collecting and coding data. While there is no one correct method (it will change depending on the project), there are myriad ways to do it incorrectly. Again, it is critical that the methodology be VERY clearly defined and the results elaborated on. The method must be tried and tested, or supported by previous studies that have proved effective coding strategies. This was the major problem of Brandt's research project about literacy and economic change. Because little was mentioned about research methodologies and the coding of variables, her information ultimately feels unsupported and useless: merely interesting stories. It becomes clear that her findings could prove just the opposite if she merely selected two different people under different social, political, and economic circumstances to interview.

Generalizations and case studies don’t often find congruence. At least not to broad generalizations over very large populations. That is not to say, however, that what is learned from a case study doesn’t produce valuable results. What is learned from a case study, if it followed strict and effective research methodologies, can be applied and practiced in order to continue learning how to achieve better results. Case studies are important to humanities and social sciences because they present insight into the complexity of the human being that must be studied in order to move forward in our understanding of any of the fields contained within.

Friday, February 6, 2009

Week 5: CITI Training and Internet Research

I love the quote posted on the CITI Internet Training site: “The Internet, like any other society, is plagued with the kind of jerks who enjoy the electronic equivalent of writing on other people's walls with spray paint, tearing their mailboxes off, or just sitting in the street blowing their horns.”

Online communication is unique. It is unthreatening, in many ways, for the participant. In many conceivable studies, the participant could be absolutely anonymous to the researcher. This means two especially important things: 1) participants may be much more honest because they feel no threat of their information leaking; this is the "great advantage"; 2) participants may, just as the above quote suggests, attempt to destroy the researcher's objectives. There is a unique sense of empowerment on the internet: power to pose as someone else; power to defame and destroy; power to deceive. While these powers are certainly possible in the physical world as well, they are much easier to wield online. Thus, participants may have an uncontrollable urge to exercise this power.

I find it interesting, though, that with internet research an informed consent document can be waived if “study participation presents minimal risk of harm to the subject….” Who is the determinant of this? The CITI training didn’t elaborate, but I would hope that there is a board of reviewers, at the very least, that would help determine whether or not the research poses more than “minimal risk.” So many researchers are so adamant about what they are doing that they can justify just about anything.

The greatest issue appears to be privacy. While in the physical world we may be able to generally classify private as "within a home or church," online, where literally millions of people can view what a person does, a space could still be considered private, or at least intended to be private. Consensus has not yet been reached in this blurry area.

The issues brought up (researchers and participants alike using deception; underage participants; assessing risk in a complicated technological society; protection of data; and communication between researcher and participant) ultimately, like any other form of research, delve into ethical questions. How can we ensure that the participant won't be hurt mentally or physically? The researcher must always review these considerations before attempting to acquire information. And with the internet, researchers must also be aware of technological hurdles and the risk of leaking information.