Victor Vitanza – “‘Notes’ Towards Historiographies of Rhetorics”
First assumption: the writing of history is ideological.
Second assumption: there is no escape from ideology.
Third assumption: the most wide-spread form of ideology is that of common sense.
To be ideological is to be semiotic and to be semiotic is to be social.
One of the major questions we have to ask when reading/writing history is “whose ideology is it?” Answering it takes more than simply deciding who sounds more “sophisticated.”
“Historians use rhetoric as a means of presenting to the reading public the Truth of The Archives” (68). Rhetoric eloquently presents historical wisdom. Historical facts MUST be considered, else history becomes a fantasy or historical romance. But facts, of course, are only true by virtue of interpretive frameworks—they are themselves interpretations. History can be considered fiction, though Marxists would consider that “hedonism.”
Communally agreed-upon propaganda can “impoverish a communal dialectic.” And a “communal dialectic” can propagandistically exclude others, especially those who are not acceptable members of a discourse community. In other words, there is a big problem with saying that the majority rules when determining historical fact. Groups cannot rely on univocal, homological, propagandistic understandings of histories; they must favor heterological ones.
History must be determined not by asking a series of questions through dialectical steps, but by interrogating through a system of always-open discourse, allowing all (and this means ALL) to participate in the construction of a history.
Three kinds of historiographies: Traditional (time); Revisionary (disclosure); and Sub/versive. The goal of this third kind is to destroy the categorizations, possibly calling itself a hysteriography—allegories of hysterias, or sophistic parodies. Instead of arborescence (the branching logic of trees), it favors middles.
Traditional Historiography: Follows a pattern of beginning, middle, and end. The first kind centers on time—narrative events and periodizations; the second does not consider time. Most histories of rhetoric fall into the first category. The problem, often, is that historians look at artifacts and documents, even on site, but fail to mention their methodology for interpreting those artifacts. Simple description without method fails. Archives are NOT self-evident.
Revisionary Historiography: Often an extension of traditional historiography or a transitional stage on a “theoretical continuum.” A hermeneutical understanding and ideological self-awareness of History Writing. This kind of historiography exposes previously undisclosed facts and is sometimes considered a correction or revision of history. An understanding is exposed that archives are subject to ideological distortion.
Sub/Versive Historiography: Sub/versive historiography is concerned with pedagogical politics and the teaching of history, claiming that teaching is, by definition, fascist. It seeks to attenuate, or ease and weaken, fascism “through [possibly] a series of individual cells of critical authority that remain non-aligned and…defused” or “through an extended radical pluralism” (108). Sub/versive seeks antinomianism, non-alignment, and nondisciplinarity. Sub/versive historiographies consist of six main ideas: 1) its purposes are anti-Platonic—parodic, farcical, and directed against reality; 2) it dissociates identity and opposes history given as continuity or as representative of tradition; 3) it opposes history as knowledge; 4) it reestablishes the groundwork for rewriting histories in the form of “an expressive, literary rhetoric”; 5) it writes us out of a philosophical, fascistic-paranoid, arborescent vocabulary; 6) it constructs mis/representative anecdotes—always allegorize, always hysterize.
Edward Corbett – “What Classical Rhetoric Has to Offer the Teacher and the Student of Business and Professional Writing”
Corbett argues that classical rhetoric has played a significant role in the way we produce technical and business writing documents. The argument is made by referring to Aristotle, Francis Bacon, and many other “ancient, dead” rhetoricians, and Corbett also notes many more contemporary authors who have historically made similar claims. Aristotle is discussed as the primary classical rhetorician, and Corbett claims that the way we address audience, emotion, ethos, and other rhetorical concerns in business and technical writing comes from what ol’ Aristotle—and Cicero and Quintilian—taught us. Because this article does not attempt to revise or add to historical understanding, it clearly falls under the “traditional historiography” method. It suffers from what Vitanza identifies as a problem with traditional historiography—it looks at artifacts but never outlines the methodologies for interpreting those artifacts.
Tharon Howard – “Who ‘Owns’ Electronic Texts?”
A description, with modern scenarios, is given of some of the copyright infringement issues we face in the workplace with electronic texts such as email and hypertexts. A history of intellectual property is given, relying primarily on “facts” that have been archived as historical “knowledge.” Several quotes about the various time periods under discussion are given, perhaps revising the way we can interpret this history based on the new perspectives amalgamated in the article. For the most part, though, this is a traditional historiography, focused on narrative events and periodization. Again, as with Corbett, there is little in the way of methodologies for interpreting the historical events.
James P. Zappen – "Francis Bacon and the Historiography of Scientific Rhetoric"
Zappen announces he will “review three twentieth-century interpretations of Bacon’s science and his rhetoric” and then present his own interpretation of Bacon’s rhetoric. At the end of his introduction, Zappen says that he recognizes these as four ideologies, and he never really dismisses the other three. This suggests early on that he is conducting a revisionary historiography, one in which he believes archives are subject to distortion. Zappen convincingly compares the three ideologies of twentieth-century thinkers, showing how each interpreted Bacon’s conceptions of positivistic science, institutionalized science, democratic science, and imaginative and plain styles. He concludes by suggesting that his interpretation offers something new to our historical discussion of rhetoric; by not denouncing the previous historical accounts, he is clearly attempting a revisionary historiography.
Discussion Questions
Vitanza’s perspective on historiographies is very compelling and persuasive. However, his argument for a somewhat abstract, subversive reconstruction of history—while it makes perfect sense theoretically—has a hard time finding a place in an article such as Howard’s. While I can certainly agree with Vitanza’s argument that traditional historiographies fail to address the interpretation of archives, I have to ask: at what point do we need to methodologically describe our historical interpretations? Certainly if we are to critique a historical perspective, it may be necessary. But I find it hard to know when farce, parody, and expressive, literary rhetoric find a place in a document that is simply trying to establish common historical ground from which a reader can move forward to understand the present argument—in other words, when the history presented isn’t part of the primary argument being made.
Thursday, March 26, 2009
Week 12: True/Quasi Experiments
Lauer and Asher – True Experiments
-experimental research actively supplies treatments or conditions, such as sentence combining, planning strategies, or word processing, to determine cause-and-effect relationships between treatments and later behavior.
True Experiments:
-actively intervenes systematically to change or withhold or apply a treatment to determine what its effect is on criterion variables.
-uses randomization by which subjects are allocated to treatment and control groups.
-give hypotheses in the form of questions or positive statements
-require a null hypothesis
-must have a randomized selection of subjects. There can be no expected differences among the subjects on any variable.
-have criterion variables to compare the control group and the treatment group with
Quasi-Experiments:
-DO NOT randomize subjects because of restrictions and use already established, intact groups
-have at least one pretest or prior set of observations on the subjects in order to determine whether the groups are initially equal or unequal on specific variables tested in the pretest
-have research design hypotheses that account for ineffective treatments and threats to internal validity.
-are either strong or weak. Strong quasi-experiments have proven equality in pretests showing the two groups are very similar, despite not being randomized; weak quasi-experiments are unequal on at least one of the variables.
-must show whether or not groups are initially equal or unequal
-still must show interrater reliability and stability of variables
-risk threats to internal validity because of the lack of randomization. However, it is better to leave outlier subjects in than to remove them from the study; the results are more generalizable this way, even if the study is a weak quasi-experiment. (A sketch contrasting the two designs follows.)
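To make the contrast concrete, here is a minimal sketch in Python, with hypothetical subjects and pretest scores: a true experiment randomizes allocation, while a quasi-experiment keeps intact groups and uses a pretest to check whether they start out equal.

```python
import random
from scipy.stats import ttest_ind

subjects = list(range(40))

# True experiment: random allocation of subjects to treatment and control.
random.shuffle(subjects)
treatment, control = subjects[:20], subjects[20:]

# Quasi-experiment: two intact classes (hypothetical pretest scores),
# compared to see whether the groups are initially equal.
class_a_pretest = [72, 68, 75, 80, 65, 71, 77, 69, 74, 70]
class_b_pretest = [70, 74, 69, 78, 66, 73, 72, 71, 75, 68]
t_stat, p_value = ttest_ind(class_a_pretest, class_b_pretest)

# A non-significant difference (p > .05) supports calling the design a
# "strong" quasi-experiment; a significant one marks it as weak on that variable.
print(f"pretest equivalence: t = {t_stat:.2f}, p = {p_value:.3f}")
```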
John M. Carroll, et al.: “The Minimal Manual”
A detailed account is given of the Minimal Manual strategy that was developed prior to this research study. A manual was designed to be simple, stripped of verbiage, and user-centered. It was task-oriented, not loaded with jargon and information. A description is given of the errors that users made and the adjustments that were made to the manual. However, no description of subject selection is given. It is impossible to tell whether the subjects were randomized. No hypotheses are given.
Two experiments were conducted after this preliminary test. In the first, 19 subjects were observed: 9 using a traditional user’s manual, 10 in the Minimal Manual group. Subjects are not randomized; thus, at best, this can only be a quasi-experiment. However, the subjects used were not already intact groups either, so the design is poor from the beginning. Interrater reliability is not given, and criterion variables are unclear. This experiment is poorly designed, or, at least, poorly described.
The second experiment has four groups that either Learn by Doing (LBD) or Learn by the Book (LBB) and tests the Minimal Manual against the traditional manual. Each group, though, was given different tasks. This automatically makes for a problematic experiment. Also, 32 subjects were tested, slightly fewer than the suggested 40 (four times the number of variables being tested), which hurts the design. Subjects are certainly not randomized, nor did they come from intact groups. Data collection is vague, and it is hard to determine interrater reliability.
Elaine M. Notarantonio & Jerry L. Cohen – “The Effects of Open and Dominant Communication Styles on Perceptions of the Sales Interaction”
A history is given of communication studies as they relate to business sales, provided to show that the variables come from previous studies and are to be further examined in this experiment. A hypothesis is given that a salesperson can manipulate his or her style to provide for maximum sales effectiveness. A distinction is explained between Open and Dominant salespersons—the Open person is gregarious, frank, conversational, unsecretive, and shares personal information; the Dominant is competitive, confident, enthusiastic, and forceful.
The subjects were business students, but they were not randomized, thus making this a quasi-experiment. A pretest is given to determine whether there is reliability in the test: 10 people were given a questionnaire that asked them to rate openness and dominance, and the results matched the hypotheses. Subjects were put into 4 different conditions, run with 6–8 people at a time. They are never shown to be equal or unequal groups, making this, likely, a weak quasi-experiment. Six specific variables are examined: 1) general attitude toward salespeople, 2) perceptions of the product being sold, 3) the interaction between the salesperson and the customer in the tape, 4) general buying behavior of the respondent, 5) probability of purchase of the product in the tape, and 6) perceptions of the salesperson depicted in the tape. Because there are over 80 respondents and only 6 variables, this meets the ratio Lauer and Asher are looking for. Overall, this appears to be a somewhat effective weak quasi-experiment.
Barry Kroll – “Explaining How to Play a Game: The Development of Informative Writing Skills”
A literature review explains the variables that have been discovered with regard to children explaining how to play commonplace games. Four variables are explained: 1) the amount of relevant information included in the instructions, 2) the kinds of rules that kids of various ages could explain well, 3) the extent to which students provided orienting details, and 4) the degree to which writers used elements of an abstract and formal approach when explaining the game. 133 participants were used, thus meeting Lauer and Asher’s variable-to-participant ratio. Subjects were NOT randomly selected, making this, at best, a quasi-experiment. They came from English classes, were determined to be “normal” students, were native English speakers, and understood the basic rules of the game they needed to explain.
There are 10 elements of the game that need to be explained. The researchers made each student take a test about the game in order to determine that they understood it in the first place; only those who scored 90% or better were counted in the research, which eliminated 16 participants. The validation of the multiple-choice test is suspect: the researchers had college seniors who had never heard of the game answer a multiple-choice test about its rules, and when these seniors scored only 13%, the test was assumed to be valid.
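The screening step itself is mechanical; here is a minimal sketch with hypothetical scores (the names and numbers are illustrative, not Kroll’s data):

```python
# Kroll-style screening (hypothetical data): only subjects who demonstrate
# knowledge of the game's rules are retained in the sample.
scores = {"student_1": 95, "student_2": 100, "student_3": 85, "student_4": 90,
          "student_5": 70, "student_6": 95, "student_7": 100, "student_8": 90}

CUTOFF = 90  # minimum percent correct on the rules test
retained = {name: s for name, s in scores.items() if s >= CUTOFF}

print(f"retained {len(retained)} of {len(scores)}; "
      f"excluded {len(scores) - len(retained)}")
```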
Week 9: Quantitative Descriptive Studies
Lauer and Asher: Quantitative Descriptive Studies
-Go beyond ethnographies and case studies to further define variables, quantify them (either roughly or accurately) and interrelate them.
-Correlate variables by various statistical means to look for strong, weak, or no existing relationships
-Report statistical analyses on variables
-Are descriptive, not experimental research, because no control groups are created and no treatments are given.
-Require a larger number of subjects than case studies and ethnographies because variables will be quantified: at least 10 times as many subjects as variables (see the sketch after this list)
-Use subjects selected based on their appropriateness to the variables and their availability
-Divide variables into independent and dependent variables
-Have alternative hypotheses to test the variable against
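A minimal sketch, with simulated stand-in data, of two of these requirements—checking the 10-to-1 subject-to-variable ratio and reporting correlations among variables:

```python
import numpy as np

n_subjects, n_variables = 120, 6
# Lauer and Asher's rule of thumb: at least 10 subjects per variable.
assert n_subjects >= 10 * n_variables, "too few subjects for the variables measured"

rng = np.random.default_rng(1)
data = rng.normal(size=(n_subjects, n_variables))  # stand-in for real measurements

# Correlate every variable with every other; strong, weak, or absent
# relationships show up off the diagonal.
corr = np.corrcoef(data, rowvar=False)  # n_variables x n_variables matrix
print(np.round(corr, 2))
```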
Brenton Faber: “Popularizing Nanoscience: The Public Rhetoric of Nanotechnology, 1986 – 1999.”
Begins with a literature review to ground the social construction of arguments and a scientist’s ability to persuade and lend credibility to his work through socially constructed rhetoric. He then shows how popular media is affecting the way the general public and scientists alike perceive science. Hypotheses are given that 1) the introduction of nanoscience would be a social and rhetorical process, 2) the introduction of nanoscience would create a persona—a presentation of the author in the text—and insert the work within an existing understanding of science, and 3) the public image of nanoscale science and technology would emerge transitionally.
A history of nanotechnology is given. Faber shows how the media’s use of terms such as “Buckyballs,” along with discussions of cryogenics, cures for cancer, self-repairing highways, bulletproof clothing as thin as a jacket, and affordable energy, has “popularized” nanotechnology.
Subject selection is described (though labeled ‘data collection’ in the article). Articles were chosen from a library database and were limited to newspapers, general interest magazines, and popularized scientific publications. Data was analyzed based on theme and rheme and article topic. While this was thorough, there appears to be a noticeable interrater reliability issue here. Faber is the only one determining the “39 representations of nanotechnology in 262 occurrences” and he alone determines the social-rhetorical nature of these articles.
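For what it’s worth, the kind of check Faber’s coding lacks is cheap to run once a second coder exists. A minimal sketch with hypothetical theme labels (the categories are illustrative, not Faber’s), using Cohen’s kappa as the agreement statistic:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical theme codes assigned to the same ten articles by two
# independent coders.
coder_a = ["progress", "risk", "progress", "cure", "risk",
           "progress", "cure", "risk", "progress", "cure"]
coder_b = ["progress", "risk", "cure", "cure", "risk",
           "progress", "cure", "progress", "progress", "cure"]

# Kappa measures agreement beyond what chance alone would produce.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.7 usually count as acceptable
```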
Discussion: I am left wondering what variables, precisely, were being examined in this study. Other than showing us that nanotechnology is being presented in popular media, I don’t understand what else Faber gives us here. What I would like to know is how this rhetorical phenomenon within the field affects the way the field is perceived. This is never really determined, and what little is given is based on Faber’s opinion. Faber needs to work through his subject selection and explain why these subjects are important and how they relate to the variables he wishes to explore.
Steven Golen: “A Factor Analysis of Barriers to Effective Listening”
Golen begins by explaining some of the listening barriers that were discovered in Watson and Smeltzer’s study. He suggests that these barriers (the variables) need further study, and he factor-analyzes them in more detail. He also increases the number of barriers from 14 to 25. Other studies where barriers are discovered are mentioned, but Golen claims that none of these studies identifies factors or dimensions of the barriers.
Golen makes a very presumptuous statement when he says: “only one instructor taught the class; therefore, all the students received the same instruction.” Subjects were chosen from business communication lectures. Because the purpose of the study was to investigate listening barriers amongst business college students, this seems like a good subject selection. 25 barriers were examined and 279 subjects were chosen, which meets Lauer and Asher’s variable-to-subject ratio requirement.
A literature review is briefly given, letting us know where the 25 variables came from—the barriers occurring most frequently across several studies.
The data analysis is a bit confusing to me. I’m unsure what “load on a factor” refers to—presumably the factor loading, the strength of a variable’s association with an underlying factor (a sketch follows). I believe Golen is trying to say that in order to make managing the data more efficient (easy), worthless variables needed to be eliminated, so several were dropped after the fact. I just don’t understand how he came to that conclusion. It is found that gender influences two of the six independent variables (barriers), and thus instruction may need to be adjusted to meet that need. Overall, it appears that this meets Lauer and Asher’s requirements, but I believe more description of the instruction students are given about listening is important; without it, it is hard to determine where subjects stand going into the study.
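A minimal sketch of factor loadings with simulated stand-in data (the dimensions mirror Golen’s 279 subjects and 25 barriers, but nothing else here is from his study):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_barriers = 279, 25
responses = rng.normal(size=(n_subjects, n_barriers))  # stand-in for survey responses

fa = FactorAnalysis(n_components=6).fit(responses)  # six retained factors
loadings = fa.components_.T  # shape: (25 barriers, 6 factors)

# A variable "loads" on a factor when its loading there is large in absolute
# value (0.40 is a common cutoff); variables with low loadings everywhere
# are the usual candidates for elimination.
for i, row in enumerate(loadings, start=1):
    if np.max(np.abs(row)) < 0.40:
        print(f"barrier {i}: loads on no factor -> candidate to drop")
```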
Saturday, February 28, 2009
Week 8: Ethnographies
Some of the key differences between ethnographies and case studies are the duration of the studies, the researcher’s role in the study, and the triangulation of data in ethnographies. In a good ethnography, the researcher assumes a participatory role in the research and immerses him or herself in the study for several weeks, months, or even years. Case studies, by contrast, can be observed from the outside and might entail only a brief period of time.
Lauer and Asher give these suggestions for ethnographies:
Things that should be a part of a good ethnography:
-minimum of overt intervention by the researcher
-based on extensive observations, researchers generate hypotheses
-researchers validate hypotheses by returning to the data
-thick descriptions—detailed accounts of writing behavior in its rich context
-researchers identify and define the whole environment they plan to study
-triangulation occurs: there is a mapping of the setting; selecting observers and developing a relationship with them; and establishing a long period of investigation
-validity comes from a continual reciprocity between developing hypotheses about the nature and patterns of the environment under study and the regrounding of these hypotheses in repeated observation, further interviews, and the search for disconfirming evidence
-researchers must be careful to report the types of roles they played in the study; their perspective becomes an important part of the environment studied
-three kinds of notes should be taken: observational, methodological, and theoretical
-there must be reliability amongst data coders and the testing of schemas in new environments
-reporting of conclusions that suggest important variables for further investigation
-findings are not generalized beyond subjects of study
-study is replicable and would produce the same or similar variables for other observers, and those variables would remain stable over time
What to look out for/problems to avoid when doing an ethnography:
1. data overload
2. tendency to follow first impressions
3. forcing data to confirm hypotheses
4. lack of internal consistency, redundancy, or focusing too much on novel information
5. missing information
6. base-rate proportion—basing impressions on a small population, ignoring total size of population
7. generalizations
Strangely, though, based on these recommendations, I found myself thinking that of the published ethnographies we read, only one really closely followed these suggestions—and it was the one I would have thought the most un-academic and least useful had I not read Lauer and Asher’s article. To me it seems odd, but the almost creative non-fiction piece by Carolyn Ellis seems to adhere most closely to the ethnography description.
Shattered Lives: Making Sense of September 11th and Its Aftermath – Carolyn Ellis
The first half of this story is very much a creative non-fiction piece, simply immersing the reader in a narrative about personal experience. However, measured against the recommendations of Lauer and Asher, it appears that this was, in fact, an effective ethnography. The latter half of the article grounds her experiences in theories that have been tested and defined. She provides a thick description of the entire environment—Washington, D.C., New York, Charlotte, airports, rest homes, etc. She validates her hypotheses about what she is experiencing through the theories she quotes, and she specifically details her role in the ethnography—she is victimized in unique ways and is an airline passenger during the attack, only on another airplane. What she experiences is never generalized, though it is conceivable that what she writes could be replicated in others’ experiences as they went through the same moment in similar situations. Ellis has provided us with something useful and replicable, and has thus made for an effective ethnographic study.
The Social Life of an Essay: Standardizing Forces in Writing – Margaret Sheehy
Sheehy starts off by explaining her role as researcher in a classroom of seventh-graders who will be assigned to design a new school that they would then attend. She, as Lauer and Asher recommend, explains her relationship with the two instructors. She explains the concern for standardization as described by theorists such as Foucault and mentions that she risks “perpetuating Street’s concern that literacy has been constructed as an autonomous, a-cultural ability to produce and reproduce a seemingly stable set of characteristics that looks like an essay” (336). The study researches the forces of standardization in youths’ essayist writing at school. It was determined in a previous ethnography by Shuman that students were “dependant on the community in which texts were produced” (337)—that they were dependent on patterns of exchange, notions of audience, and situations of use of texts. To determine the standardization of the writing in this classroom, Sheehy examined the articulations of relations across the 8-week Building Project.
Sheehy asks, when doing this research project: “What are the standardizing forces at work here, and what are the ones that stratify?”
A description of the environment and participants (students) is given, naming specific demographic information, the number of students, and the allotted time for teaching each day. However, there is ambiguity as to what exactly is going on here and what Sheehy is really doing. A thicker description here would be helpful, along with less description later of what students actually said. Sheehy explains her role and awkward position as researcher in the classroom. Triangulation occurred as Sheehy mapped the environment, got to know her subjects and observers, and established a long period of investigation. Sheehy codes data by interpreting what students talked about regarding the School Board’s decision to rebuild the school. A great deal of the article is spent describing the student comments in the two levels of analysis observed: 1) production, consumption, and distribution, and 2) centripetal and centrifugal forces in writing.
Ultimately, I was left thinking there was strong potential here—Sheehy gives good theoretical background to support what she is attempting to argue. However, when I think in terms of replicating the project, it is difficult to understand what she and the teachers were actually doing in the classroom. Less time spent on specific data and a much thicker description of what was going on and how teachers, researchers, and students interacted is necessary. Honestly, I had a difficult time understanding what really was researched.
Writing in an Emerging Organization – Stephen Doheney-Farina
An exigency is stated: professional communication students need to learn how complex social contexts can and should affect their writing in the workforce. The author asks: How are writers’ conceptions of rhetorical situations formulated over time? How do writers’ perceptions of their social and organizational contexts influence the formulation of these conceptions of rhetorical situations? What are the social elements of writers’ composing processes? How do writers’ perceptions of their organizational contexts influence these processes? How do writing processes shape the organizational structure of an emerging organization?
Hypotheses: 1) rhetorical discourse is situated in time and place and is influenced by exigence, audience, purpose, and ethos; 2) the rhetor conceives of these situational factors through interactions with persons, events, and objects that exist external to the rhetor; 3) the researcher attempts to explore human interaction as it is evident in social and cultural settings; 4) a microscopic investigation of important parts of a culture can elicit an understanding of that culture; 5) individuals act on the basis of the meanings that they attribute to the persons, events, and objects of their worlds; 6) researchers seek diverse interpretations of the acts under study; 7) the researcher is the primary research instrument, and as such he or she must play a dual role.
Data were recorded in four ways: field notes, tape-recorded meetings, open-ended interviews, and discourse-based interviews. The status of the startup company is given, explaining the crisis of not generating revenue and the rhetorical decisions that had to be made in writing a business plan for investors. The writing of the Plan required agreement upon writing strategies, which caused hierarchies to change and cooperation to happen, thus changing the company and the writing process. Details describe the company’s woes and triumphs, primarily related to power struggles.
Theories are described to analyze the findings, and implications for teaching are given.
Overall, the story that is given is interesting and has potential to teach and instruct. However, this “ethnography” doesn’t closely follow Lauer and Asher’s suggestions. First off, little is known about the researcher and his direct involvement during the research process. All that is known is that for several weeks he met with organization members to observe and interview. It is difficult to comprehend his rapport with the company members. What he does do well is validate his hypotheses by returning to his observations. The problem here, though, is that data doesn’t seem to be coded and there is no inter-rater reliability. Also, little is known about the kinds of notes he took—are they observational, methodological, or theoretical? Mostly, they seem methodological. Variables are certainly discovered that could be used for further research—rhetorical writing strategies in Business plans, for one—but Doheney-Farina problematizes his findings by suggesting that what he found should apply to the classroom, thus indirectly generalizing what was found. Finally, it is hard to know if this study is replicable because little was explained about the methodology.
Learning the Trade: A Social Apprenticeship Model for Gaining Writing Expertise – Anne Beaufort
Research questions are given: What defines the social status of texts and, by association, the social status of writers within discourse communities? If writing is a social act, what social roles aid writers in learning new forms of writing? Is there a developmental continuum for writers who are in the process of being socialized into new discourse communities?
Much theory is reported about the social experiences that affect student behavior and learning processes. A description of the site of the ethnographic study is given: a nonprofit organization in the heart of an urban area. Two newcomers to the organization are named as the subjects who will be interviewed over a year-long process. Data were collected through a series of interviews with several workers, mostly with Pam and Ursula, who claim to spend 50% of their time at work writing. Interviews were audiotaped, and all writing was photocopied, including drafts and revisions. Field notes, interview transcripts, and writing samples were analyzed for patterns and themes in relation to social roles for texts. Triangulation is described: 1) data sources were compared with each other, 2) informants’ responses were compared over time, and 3) informants’ responses to drafts of the research report were solicited.
Findings are given, but at first they don’t seem very ground-breaking. It is found that more difficult, rhetorically driven writing (like large government grant proposals) was given to more experienced writers who ranked higher in the organization’s hierarchy, whereas simpler tasks were given to the less experienced employees. A detailed description of the writing tasks performed, primarily by Ursula and Pam, is given. It is shown that there are five areas of context-specific knowledge that the expert writer or old-timer had acquired: discourse community knowledge, subject matter knowledge, genre knowledge, rhetorical knowledge, and task-specific procedural knowledge. Six of the writing roles observed—observer, reader, clerical assistant, proofreader, grammarian, and document designer—did not require context-specific knowledge.
Overall, theoretical approaches are taken to observe and analyze data. However, the information seems fairly useless and obvious. The main findings indicate that writing responsibilities increase in importance to the organization as local knowledge increases while the employee moves from novice to expert writer in the company. Doesn’t that seem obvious? Most problematic, however, is Beaufort’s attempt to generalize her findings, even saying that “it is likely that most of the elements needed to duplicate the kind of writing apprenticeship that occurred at JRC are readily available in other workplace settings.”
Thursday, February 19, 2009
Week 7: Survey and Sampling
Lauer and Asher claim that surveys are useful when a researcher needs to determine something that can be assumed (with high confidence) to be representative of a large group. Whereas case studies examine in detail an isolated group in order to name and describe variables for that group, surveys that are “statistically designed…can be considered representative of the entire population” (54). The greatest benefit of surveys, if done correctly, is that a researcher can gain valuable, generalizable information that can be applied to a large population without the time, effort, and cost of meeting with and interviewing everyone in that population. The question, then, that I would raise from this is: is the difference that separates survey/sampling research from case studies the fact that surveys have the potential to be generalized if the confidence limits are strict enough, whereas case studies are never generalizable?
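As a rough illustration of what “statistically designed” buys you, here is a minimal sketch (with hypothetical numbers) of a 95% confidence interval for a proportion estimated from a random sample, using the standard normal approximation:

```python
import math

n = 400          # sample size (hypothetical)
p_hat = 0.62     # observed proportion answering "yes" (hypothetical)

z = 1.96  # z-score for 95% confidence
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI: {p_hat - margin:.3f} to {p_hat + margin:.3f}")  # roughly +/- 4.8 points
```

Since the margin of error shrinks with the square root of n, quadrupling the sample only halves the interval—one reason representative sampling is as expensive as Lauer and Asher suggest.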
The selection of subjects for surveys is both important and difficult. Subjects are chosen by random sampling, but with a group large enough to represent the population in order to have strict confidence limits. I can imagine this is very difficult. One of the greatest hindrances to selecting an effectively representative subject pool is the fact that researchers must rely heavily on the subjects' willingness to take the time to fill out a survey. Without incentive—which often means payment or "extra credit"—few people feel motivated to fill out a survey. Even with incentive it can at times be challenging. This is obvious when we look at Rainey et al.'s "Core Competencies" article, where they invited 587 people to participate and only 47 initially responded. This poor showing makes the information less generalizable, thus requiring the authors of the article to apologize for their data by stating that "the data can be assumed to be suggestive for the profession, if not representative because of the small, non-probabilistic sample." Of course, this doesn't discredit what they learned from their study, and valuable information was still gained. However, the study then becomes more a case study of willing managers than a sampling that can be representative—which is the purpose of survey and sampling, is it not?
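Running the Rainey et al. numbers quickly shows why that apology was necessary. The margin-of-error line below is only illustrative: it assumes a simple random sample, which the authors explicitly say they did not have.

```python
import math

invited, responded = 587, 47  # figures reported in the article

print(f"response rate: {responded / invited:.1%}")  # ~8.0%

# Even if the 47 respondents were a simple random sample (they were
# not; the authors call the sample non-probabilistic), the margin of
# error at 95% confidence would still be very wide:
moe = 1.96 * math.sqrt(0.25 / responded)
print(f"margin of error: +/-{moe:.1%}")  # roughly +/-14.3%
```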
Three types of data can typically be counted: nominal, interval, and rank order. These data can be viewed easily and effectively by using the K data matrix, which aligns subjects with the variables. Nominal data are simple counts of things like, as L&A show, the number of comma faults in a composition. Interval data are things like scores that have numerical intervals between them—test grades and the like. Rank-order data assign ranks from 1 to n.
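Here is a rough sketch of what such a matrix of subjects and variables might look like in code. The subjects, variables, and values are all invented for illustration; only the three data types come from L&A.

```python
# Hypothetical K data matrix: one row per subject, one column per variable.
subjects = ["S1", "S2", "S3", "S4"]
data = {
    "comma_faults": [3, 0, 5, 1],      # nominal: a count per composition
    "test_score":   [88, 92, 71, 85],  # interval: meaningful numeric gaps
}

# Rank-order data: derive ranks 1..n from the interval scores
# (rank 1 = highest score).
order = sorted(range(len(subjects)),
               key=lambda i: data["test_score"][i], reverse=True)
data["rank"] = [0] * len(subjects)
for rank, i in enumerate(order, start=1):
    data["rank"][i] = rank

for i, s in enumerate(subjects):
    print(s, {var: values[i] for var, values in data.items()})
# S1 {'comma_faults': 3, 'test_score': 88, 'rank': 2} ...
```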
Friday, February 13, 2009
Week 6: Case Studies
In composition and communication studies we argue and fight over the subjectivity of things like assessment, "good" versus "poor" writing, and the degree of effectiveness of rhetoric and persuasion (among, obviously, many other things that we argue about…). Subjectivity creates a challenging quagmire within fairly abstract fields such as English, writing, public speaking, and other forms of human communication. Unlike scientific inquiry, where generalizable answers often reign in importance, humanities-based inquiry must develop understanding through much less generalized information. Hence our need for case studies and qualitative research. While the information gathered from case studies is rarely generalizable, and much too difficult and expensive to acquire in mass quantities, it can nonetheless be very valuable—insightful and influential. Thus, when we approach questions about pedagogy, assessment, and the effectiveness of writing, qualitative case studies are not only appropriate but very necessary. They do, after all, spark a series of questions that continue investigation into difficult and complex problems.
The selection of subjects for such studies should invariably correlate with the research questions at hand. Considering Flower and Hayes' research on pregnant pauses, it would have made little sense, for example, to choose only novice writers to determine where and how pauses affect the overall quality of writing. Likewise, in Flower, Hayes, and Swarts' research design, choosing subjects with little or no advanced education or workplace experience would likely have produced much less useful results. One thing that is clear, though, is that when we write up a research report, we must clearly define who our subjects are. Without this descriptive information, it is difficult for the reader to grant the results credibility and usefulness.
The most complicated—and important—component of qualitative research is defining an effective and reliable method of collecting and coding data. While there is no one correct method—it will change depending on the project—there are myriad ways to do it incorrectly. Again, it is critical that the methodology be VERY clearly defined and the results elaborated on. The research must be tried and tested, or supported by previous studies that have proved their coding strategies effective. This was the major problem with Brandt's research project about literacy and economic change. Because little was mentioned about research methodologies and the coding of variables, her information ultimately feels unsupported and useless—merely interesting stories. It becomes clear that her findings could prove just the opposite if she merely selected two different people under different social, political, and economic circumstances to interview.
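One concrete way to show that a coding scheme is "tried and tested" is to report inter-rater agreement. This is my addition, not something L&A or Brandt spell out, but a minimal Cohen's kappa check for two coders might look like this; the codes and segments are invented.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement.

    coder_a, coder_b: parallel lists of categorical codes.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented example: two raters coding ten interview segments.
a = ["theme", "story", "theme", "aside", "theme",
     "story", "theme", "aside", "story", "theme"]
b = ["theme", "story", "theme", "theme", "theme",
     "story", "aside", "aside", "story", "theme"]
print(round(cohens_kappa(a, b), 2))  # 0.68, "substantial" by common rules of thumb
```

If two readers can't agree on the codes at better-than-chance rates, the patterns reported from those codes become exactly the "merely interesting stories" problem.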
Generalizations and case studies don't often find congruence, at least not broad generalizations over very large populations. That is not to say, however, that what is learned from a case study doesn't produce valuable results. What is learned from a case study, if it followed strict and effective research methodologies, can be applied and practiced in order to continue learning how to achieve better results. Case studies are important to the humanities and social sciences because they present insight into the complexity of the human being, which must be studied in order to move forward in our understanding of any of the fields contained within them.
Friday, February 6, 2009
Week 5: CITI Training and Internet Research
I love the quote posted on the CITI Internet Training site: “The Internet, like any other society, is plagued with the kind of jerks who enjoy the electronic equivalent of writing on other people's walls with spray paint, tearing their mailboxes off, or just sitting in the street blowing their horns.”
Online communication is unique. It is unthreatening, in many ways, for the participant. In many conceivable studies, the participant could be absolutely anonymous to the researcher. This means two especially important things: 1) participants may be much more honest because they feel no threat of their information leaking—this is the "great advantage"; 2) participants may, just as the above quote suggests, attempt to destroy the researcher's objectives. There is a unique sense of empowerment on the internet—power to pose as someone else; power to defame and destroy; power to deceive. While these abuses are certainly possible in the physical world as well, they are much easier to carry out online. Thus, participants may have an uncontrollable urge to exercise this power.
I find it interesting, though, that with internet research an informed consent document can be waived if "study participation presents minimal risk of harm to the subject…." Who determines this? The CITI training didn't elaborate, but I would hope that there is, at the very least, a board of reviewers that would help determine whether or not the research poses more than "minimal risk." So many researchers are so adamant about what they are doing that they can justify just about anything.
The greatest issue appears to be privacy. While in the physical world we may generally classify "private" as what happens within a home or church, online activity—where literally millions of people can view what a person does—could still be considered private, or at least intended to be private. Consensus has not yet been reached in this blurry area.
The issues brought up—researchers and participants alike using deception; underage participants; assessing risk in a complicated technological society; protection of data; and communication between researcher and participant—ultimately come down, like those in any other form of research, to ethical questions. How can we ensure that the participant won't be hurt mentally or physically? The researcher must always review these considerations before attempting to acquire information. And with the internet, researchers must also be aware of technological hurdles and the risk of leaked information.