Thursday, April 2, 2009
GOOGLE = 466453
For broader questions, try ChaCha. This service actually uses live people to find answers to just about anything and then texts you back, and it is free as well.
I learned about this in a TED talk given by David Pogue, tech critic for the New York Times.
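The number works because 466453 is simply "GOOGLE" spelled out on a standard phone keypad. As a quick throwaway sketch (assuming the usual T9-style letter grouping), you can verify the mapping yourself:

```python
# Map each letter group to its digit on a standard phone keypad.
KEYPAD = {
    'abc': '2', 'def': '3', 'ghi': '4', 'jkl': '5',
    'mno': '6', 'pqrs': '7', 'tuv': '8', 'wxyz': '9',
}

def to_keypad_digits(word):
    """Spell a word as the digits you'd press on a phone keypad."""
    digits = []
    for ch in word.lower():
        for letters, digit in KEYPAD.items():
            if ch in letters:
                digits.append(digit)
                break
    return ''.join(digits)

print(to_keypad_digits("GOOGLE"))  # -> 466453
```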
Tuesday, March 31, 2009
1 carton sour cream
1 carton guacamole dip
1/2 pkg. taco seasoning
1 bunch green onions
12 oz. mild Cheddar cheese
2 lg. tomatoes, diced
1 can chopped black olives
In large container, spread in layers:
sour cream mixed with 1/2 package taco seasoning
guacamole dip
shredded Cheddar cheese
chopped green onions
diced tomatoes
chopped black olives
Qualitative Analysis: Who’s teaching, who’s learning? "Analyzing the professional growth of graduate student tutors"
This study examines a group of graduate students who worked one-to-one with struggling readers in tutoring sessions, and seeks to understand the impact of that experience on their growth as professional teachers.
This research question is broad enough to capture the aim of the study, yet specific enough to narrow its scope.
This study used Keene and Zimmerman’s notion of schema theory as the basis for a framework to organize its findings. The three main domains of interest were teacher learning within the tutoring situation, teacher learning within the classroom setting, and teacher learning within the wider educational community or ‘real world’.
Overall, this study provides interesting insights regarding the professional growth of graduate student tutors. As a reader, I found it especially interesting that despite their experience teaching whole classes, the teachers in this study still found one-on-one tutoring challenging. This realization calls into question whether the ‘teaching’ applied in regular classes actually takes the ‘student factor’ in the learning process into account. In addition, from the participants’ testimonies and reports, it is clear that most participants gained significant learning experience throughout the study.
The context in which the study was done is explained thoroughly in the paper (place, setting, time, schedule, structure, teaching model used, etc.), and the multivocality of participants across different settings was addressed (for example, most participants agreed that teachers should respond to students’ individual needs while admitting that this is not reflected in their current classroom practice). However, there is little information about the participants themselves: neither the tutors’ backgrounds nor the tutees’ are described. More information about the participants and their histories (what classes they teach, whether they are currently teaching, how long they have been teachers, etc.) might provide deeper insight, especially when interpreting their perspectives. Similarly, while this study may not focus on the tutees, a little background on these students would still be useful, since they are an important part of the study.
Researcher Positioning and Reporting Style
The first author of this study is the instructor of the course in which the study was conducted. To limit bias and conflict of interest, the analysis of data was delayed until after students’ grades were submitted. Despite this effort, the threat of bias cannot be eliminated, given the nature of the data collected and of the professor-student relationship. It might therefore be more informative for readers if the researchers explicitly identified possible biases in the data and findings.
While the first author was linked very closely to the study and can be considered to be ‘in’ it, the results were reported in an impersonal, structured way, based on themes that emerged across all the data collected. The researchers started out with three domains of interest in mind and then sought themes that emerged within each domain. Because the findings were organized to fit an existing framework, I found the information easy to follow as a reader.
There are six data sources in this study: written reflective analyses of teaching audiotapes/videotapes, email messages to the instructor across the second semester, written reactions to professional readings, email messages among class colleagues about insights gained from those readings, final reflections from tutees’ case-study reports, and a reflective essay detailing understandings about struggling readers. While the triangulation process is not explicitly explained, similar themes clearly emerge across the different data sources, as is evident in the quotes provided for each theme. Support for each theme does not come from one data source only; within each theme, there are indications that findings were drawn from at least two or three different sources.
Member Checking and Outlier Analysis
There is no information regarding member checking in this study. In addition, there is no indication of any outliers; all participants seemed to agree with each other. One possible explanation is that the study included regular discussion among participants as part of its design. Given those discussions, it is not surprising that all participants seemed to agree about the benefits of this one-on-one teaching experience for their professional growth.
There was no long-term observation in this study. While there is an indication of positive changes across time, the study was mainly interested in learning growth within a semester-long period.
No representativeness check was done. Given the design of this study, it would be very hard to find a representative sample; the fact that the participants were in graduate school may distinguish them from the broader teacher population.
No coding check was done in this study. While there is no quantitative analysis of inter-rater agreement, since the study involved two authors, it can be assumed that there was some degree of agreement between them in interpreting the findings.
THE RESEARCH REPORT
The report of the study is well documented, which makes it easy for readers to understand the findings. Using an established framework, Atkinson and Colby (2006) organize their findings into three domains: within the tutoring situation, within the classroom setting, and in the wider educational community and the real world. Within each domain, several themes are reported.
All themes are clearly stated and there is no redundancy in the coding. Within each theme, the researchers include dialogue and direct quotes from several different participants, drawn from several data sources, to support their assertions. Furthermore, an illustration of the context within which these themes emerged helps readers understand them better.
Interestingly, there is no indication of any unexpected or discrepant findings. There are two possible explanations for this. The first explanation is that there was simply no discrepancy within the data collected. The alternative explanation is that the researchers might have treated any discrepancy as an outlier, and have excluded it from the report of the finding.
The findings are discussed in relation to other research in the area. In the discussion section, the researchers link their findings with the existing literature and discuss possible implications and suggestions for teachers in general.
While one of the researchers was very closely involved in the whole tutoring period, the authors do not explain how their own perspectives may have influenced the interpretation of the data. Addressing personal assumptions and possible biases would strengthen the reliability of the study.
Wednesday, March 25, 2009
Here is the list for the Pot-Luck:
- Mariska (muffins)
- Al (smoked trout)
- Ken (seven-layer dip & tortilla chips)
- Inder (spinach dip & bread)
- Angela (potato salad)
- Carolyn (spring rolls)
- Pam (veg. pakoras)
- Frank (pastitsio)
- Evelyn (dumplings)
- Maureen (something Canadian)
- Yuli (juice & pop)
Analysis of “Peer Interaction and Critical Thinking: Face-to-face or Online Discussion?” by Jane Guiller, Alan Durndell and Anne Ross
In their study “Peer Interaction and Critical Thinking: Face-to-Face or Online Discussion?”, researchers Jane Guiller, Alan Durndell and Anne Ross compared live discussions with online discussions, examining how the two modes of communication influence critical thinking skills and how they might be used to develop them. A 21-point scale was used to quantitatively measure the depth of critical thinking in those discussions within a repeated-measures design. The researchers concluded that both modes of communication should be integrated to provide optimum conditions for advancing critical thinking: online discussions afforded a depth of critical thinking unobserved in the live discussions, while more “brainstorming” and collaborative learning and thinking happened during the face-to-face discussions.
A clear and immediately evident strength of the study is its concrete definition of the terms central to the research. The key terms “collaborative learning” and “critical thinking” are defined for the purposes of the research almost immediately in the introduction. Prior studies’ definitions of critical thinking, which more or less suggest it is a set of skills involving reasoned argumentation in a social context, are referenced, lending credence to the definition used here. The introduction also lays the foundation for the study regarding the collaborative aspect of learning through discussion and cites ample previous work to support both the terminology and the method of quantitative measurement used.
Although both the researchers and the subjects of the study have psychology backgrounds, there is no discernible bias toward or against that body of thought, nor any betraying their connection to Glasgow Caledonian University, where the work was done. The researchers did indicate a belief in the benefits of both online asynchronous discussions and live discussions for developing critical thinking skills. Beyond the subject of the study being two modes of collaborative learning and teaching, the citation of Vygotsky (p. 188) and the discussion of guided practice in the context of “scaffolding” shortly thereafter suggest a social constructivist theoretical orientation. There is no justifiable reason to believe, however, that this constructivist framework, or the belief in the benefits of online and live discussion for critical thinking, leans toward a preferred outcome regarding the two modes of communication. It is therefore reasonable to conclude that the reliability and validity of the research are not adversely affected by these orientations.
Though not included in the introduction, the hypotheses merit their own sub-heading under a heading called “The Present Study”. All three hypotheses are clearly delineated and quantifiably measurable using the 21-point criteria originated by Anderson et al. in 2001 and outlined by the authors in the attached appendix (p. 189). The researchers opted to measure the discussions using the Anderson methodology because it seeks to reveal critical thinking indicators, which is precisely what they set out to do. Using such a precise measurement tool adds significant validity to the experiment.
The authors of the study successfully advocated the case for their study. After reading the introduction, one has a very clear understanding of the problem, the necessity of the current research, the definition of salient terms, and the means by which the current research will achieve the goal of further understanding by filling gaps and addressing inconsistencies in prior research including what is to be measured and how.
The independent and dependent variables are explicitly, clearly and operationally defined. The independent variable was the two modes of the discussion: face-to-face (condition 1) and online (condition 2). To assess the two groups, the current study employed a repeated-measures design which was an appropriate design choice for the research. By employing this design, the researchers were able to avoid the threats to internal validity that plague similar research. Problems such as regression, mortality, and maturation, which arise when comparing groups, were eliminated via the design (Creswell, 2008). By counterbalancing the procedure conditions (face-to-face groups met twice to allow for further thought on their subject as would be afforded by the asynchronous online discussion), a great degree of internal validity was preserved. Participants were made privy to the purpose of the design and the focus was on critical thinking more than the mode of discourse thus leaving little room for compensatory rivalry, compensatory equalization, or resentful demoralization among the participants.
Experimental procedures were somewhat ambiguous on the face-to-face side. What were the “rules” of the live discussions? What was the structure? Did instructors participate? Although the material and subject to be discussed, and the criteria the students would use to evaluate it, were given, the specifics of the actual structure of the face-to-face forum were not addressed. How formal was the discussion group? Was it chaired? These are valid questions, as the structure of the face-to-face discussion would most likely affect the level of engagement of the participants, which would in turn affect the results of the study. The online condition was briefly outlined, but similar questions regarding instructor participation, structure, and formality exist there as well. This might well present an opportunity for further study, as more specific control over these conditions might produce more accurate and valid results. Outside of the questions outlined above, the procedure for the study was clearly illustrated and, aside from issues arising from those same questions, the study could be replicated more or less directly.
The study concludes with suggestions for further research and implications for practice. The authors explicitly state that “further research is required in order to investigate the extent to which these results extrapolate to other collaborative critical thinking activities and across disciplines.” (p. 198) Another suggested avenue of research is how critical thinking development through blended learning tasks such as online or live discussions may transfer to other tasks requiring similar skills. Regarding implications for daily practice, the study concludes that a “combination of both face-to-face and online discussion seems to be most beneficial to students.” (p. 198)
Teachers could model their classroom practice on the findings of this study. Based on what Guiller, Durndell, and Ross found, teachers could structure their class discussions so that a live, face-to-face initial discussion is followed up with online discussion. Additionally, the 21-point assessment tool could easily be adapted for classroom use to help advance students’ critical thinking skills.
Tuesday, March 24, 2009
Sunday, March 22, 2009
The qualitative paper I analyzed is from an investigation carried out by Jeong-Bae Son of the University of Southern Queensland. The study, titled ‘Learner Experiences in Web-based Language Learning (WBLL)’ (Son, 2007), was published in the journal Computer Assisted Language Learning. It explores the language learning experiences of English as a second language (ESL) learners through the use of Web-based programs. For this post I have focused on the data collection methods carried out by Son.
Multiple Data Sources & Triangulation
Son used ‘multiple data sources and triangulation of data collection methods to develop a rich description and discussion of learner experiences in WBLL’ (p. 22). For instance, a pre-questionnaire was used to gather background information on the subjects such as their age, gender, and previous experiences with computers. However, the pre-questionnaire results were not presented to the reader (p. 23); these results would have given readers a better overall sense of the learners’ comfort with language learning, computers, and Internet navigation. A final post-questionnaire was administered to the learners after the last WBLL session, consisting of 11 closed questions and five open-ended questions. It is important to note that Son’s use of only positively phrased questions may have skewed the results, as students may have answered ‘strongly agree’ or ‘agree’ to all the questions just to finish the survey faster (p. 29). It is important for researchers to include some negatively phrased questions to encourage respondents to read each item carefully (Creswell, 2008). For instance, instead of ‘I was comfortable using the web during the web activities,’ the question could have been restated as ‘I was not comfortable using the web during the web activities’ (p. 24).
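When a questionnaire does mix in negatively phrased items, those items have to be reverse-scored before the responses are summarized, so that a high number always means the same thing. A minimal sketch of that step, using hypothetical item names and made-up responses (on a 5-point scale, a response r becomes 6 - r):

```python
# Reverse-score negatively phrased Likert items so higher always means
# a more positive attitude. Item names and responses are hypothetical,
# not from Son's questionnaire.
SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

def reverse_score(response, scale_max=SCALE_MAX):
    """Flip the direction of a response on a 1..scale_max scale."""
    return scale_max + 1 - response

# One respondent's answers; 'web_uncomfortable' is the negatively
# phrased counterpart of 'I was comfortable using the web'.
responses = {"web_comfortable": 4, "web_uncomfortable": 2, "tasks_clear": 5}
negatively_phrased = {"web_uncomfortable"}

adjusted = {
    item: reverse_score(r) if item in negatively_phrased else r
    for item, r in responses.items()
}
print(adjusted)  # 'web_uncomfortable' becomes 4, agreeing with 'web_comfortable'
```

A respondent who just clicked ‘agree’ all the way down would then produce contradictory adjusted scores, which is exactly the inattention that mixed phrasing is meant to expose.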
Another source of data collection was the observation forms. The first form consisted of eight WBLL sessions being videotaped for future playback and analysis. The second form was in real-time where a research assistant recorded on-task and off-task behavior both during online and off-line activities. This allowed the researcher to document the students’ usage of time when completing tasks (p. 24).
The last form of data collection was through interviews conducted one week after the WBLL sessions. Each participant had an interview with the classroom teacher. ‘The purpose of the interviews was to cross-check students’ responses to their post-questionnaire and to seek more information which was not possible in Section 1 of the post-questionnaire (p. 24).’ This additional interview step allowed for the evidence to be triangulated through the different forms of data – observational field notes and interviews. Evidence was further corroborated through different individuals such as the research assistant (real-time observations), and the students (pre- and post-questionnaires) (Creswell, 2008, p. 266).
Member checking to represent the emic perspective, the view from inside the culture (Creswell), was not observed in this study. It is important for researchers to have the subjects review the statements made in the report to ensure accuracy and completeness (Gall, 2003). The process of data analysis is iterative, as the researcher asks the subjects the same types of questions through various forms such as questionnaires, interviews, and observational forms.
When analyzing Table 2 (p. 29), there did not appear to be any outliers in the data: none of the students responded ‘strongly disagree’, and only questions five and eight had three or more students responding ‘disagree’, though this observation was not clearly explained in the analysis. A better analysis was performed on the open-ended questions, where a brief explanation was provided for why students may have answered negatively to a particular question (p. 30). There was one unexpected or discrepant finding the author needed to explain, relating to the ‘one-third of the students [who] disagreed that their experience in WBLL made their language course more interesting’ (p. 33). Son explained this by suggesting that the course content and web materials should be tailored to the students’ needs, to ensure students are comfortable and confident enough to proceed with the activities. He also pointed out that students’ computer skills need to be considered when expecting a task to be completed. The findings of a study are strengthened when an author is able to discuss outliers.
Moreover, the study did not carry out any long-term observations of WBLL with the ESL students, other than the one-week follow-up interview after the last WBLL session. The author does mention that ‘it was difficult to measure actual learning outcomes in a statistically meaningful way over such a short period of time’ (p. 34). Furthermore, no representativeness check was done; this check would have determined whether the findings are typical of the situation (Gall, 2003).
Friday, March 20, 2009
Within a conceptual framework of analysis based on verbal interactions, developed from early theories of metacognition, Larkin (2006) takes a phenomenological perspective to investigate the dynamic interaction between the development of the self and the development of metacognition in two six-year-old children. Larkin also completes an in-depth grounded analysis based on Piagetian stage development theory combined with a broad Vygotskyan framework emphasising the social construction of knowledge and thinking.
Larkin uses audio recordings of the dialogue of two subjects participating in the CASE programme, as well as data collected from nine observations of the subjects as they collaborated in a group of six students on CASE activities. Larkin provides a detailed contextual description of the educational environment of the two six-year-old subjects, the teacher’s training and skills, and the educational programme used. Larkin highlights the diverse perspectives of the subjects and provides tacit knowledge of the subjects’ non-verbal communication; for example, she supports the transcripts with descriptions of the subjects’ hand gestures, sighs, and facial expressions.
Larkin’s (2006) study is preceded by Venville, Adey, Larkin and Robertson’s (2003) qualitative study, in which Larkin, as co-author, examines the effectiveness of the CASE programme in fostering thinking through science in the early years of schooling. However, Larkin does not inform the reader of her co-authorship of Venville et al., and therefore fails to identify any assumptions, beliefs, values or biases that may have influenced her interpretation of the data obtained from the audio recordings and observations in her 2006 study.
Larkin (2006) presents the data in a clear format. However, personal accounts from the participants are not presented and member checking does not occur. The three clearly stated major themes in Larkin’s study are based on the person, task, and strategy variables described in Flavell’s (1979) model of cognitive monitoring. Data were collected over the course of a school year, which increases the reliability of the findings of the case studies. The major themes are used as category codes to code the children’s interactions.
Lastly, interconnecting themes occur between the two case studies.
Inter-rater reliability of the coding system is checked by two researchers not connected to the CASE programme, using three different transcripts of children working on CASE activities. A reliability rating of 97% is achieved, and disputed areas are not included in the study. Larkin (2006) states that the two six-year-old students, a boy and a girl, chosen for the case studies are typical of the CASE participants in that they engage with their group and with the CASE activities presented. However, she acknowledges that if two other children had been chosen the descriptive data would be different.
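As a rough illustration of how a figure like 97% is typically computed (a sketch with invented codes, not Larkin's actual transcripts), percent agreement is simply the share of coding decisions on which the two raters matched:

```python
# Percent agreement between two independent coders. The codes and data
# below are invented for illustration; Larkin's categories and
# transcripts differ.
def percent_agreement(rater_a, rater_b):
    """Share of items both raters coded identically, as a percentage."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical codes: 'P' = person, 'T' = task, 'S' = strategy
# (the variables from Flavell's model used as category codes).
rater_1 = ['P', 'T', 'T', 'S', 'P', 'S', 'T', 'P', 'S', 'T']
rater_2 = ['P', 'T', 'T', 'S', 'P', 'S', 'T', 'P', 'P', 'T']

print(percent_agreement(rater_1, rater_2))  # 9 of 10 codes match -> 90.0
```

Note that simple percent agreement does not correct for chance agreement (statistics such as Cohen's kappa do), and the study does not say which form of the statistic was used.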
A narrative approach is used to examine the data of both case studies. Larkin (2006) presents the data using the same layering approach for each: she begins with a description of the subject’s social skills, followed by a transcript of the group’s dialogue; she supports the dialogue with tacit knowledge from field notes taken during the observations regarding the behaviours of group members as they collaborated in the CASE activities; and she concludes with a summary of the findings. Obtaining 97% inter-rater reliability, using observational data and transcripts, revealing interconnected themes between the case studies, and applying a layering approach allow Larkin to partially triangulate the data. However, as mentioned earlier, she does not provide personal accounts from the subjects to validate her interpretations of the motivations and thinking that influence the subjects’ dialogue and observed behaviours. It is also unclear whether Larkin (2006) is the non-participatory adult observer described in her study or whether others did the observing.
Larkin also does not provide contrary information, multiple perspectives or extreme unexplainable results. She acknowledges large sections of the recorded transcripts were not coded and were omitted, as they did not relate to the category codes. However, she does not provide a description or examples of the omitted data so the reader has no idea how this data differed from the included data. A better understanding of the omitted data and the reason it was omitted may have been a way to test and strengthen the basic findings of the study.
Larkin, S. (2006). Collaborative group work and individual development of metacognition in the early years. Research in Science Education, 36(1-2), 7-27.
Venville, G., Adey, P., & Larkin, S. (2003). Fostering thinking through science in the early years of schooling. International Journal of Science Education, 25(11), 1313-
The purpose of this study, according to Pérez-Prado & Thirunarayanan (2002), was to explore students’ perceptions of their own learning experiences by comparing an online course with a classroom-based course on teaching ESL strategies to teachers.
While reading through this study, I had a recurring thought: depending on how the online and classroom courses are designed, it’s possible to influence the results to ensure the outcome confirms the researcher’s bias.
Without even looking at the study, you could probably figure out how the design of each of these courses would influence the perception of each method. What if my online course had no chat or forum capabilities but my classroom course was entirely based on group-work? What if my classroom course depended solely on monotone lectures from the front of the room but my online class used an interactive game-like UI that situated the learner in the appropriate context? What if I chose clips from really good movies to illustrate points for my online course but had students read their textbooks for the classroom course?
What exactly is being compared? This is the question I found myself asking while reading this study. For example, one of the themes identified by the authors was the “importance of the affective domain in the learning process”. In this category, students in the classroom course had a lesson delivered entirely in Arabic. The purpose was to give them the experience of what it’s like to be unable to communicate or understand what is being said. Students in this class commented on how they felt overwhelmed, exhausted, lost, and confused. For the online version of this lesson students had to read about the experiences of ESL students, and imagine themselves in this situation. The researchers found that the online students were “not as affected by the experience”. No kidding! This is not a comparison of online to classroom. To create a closer comparison, the online lesson could have been delivered in Arabic asking students to type commands or click buttons.
While I understand that qualitative studies don’t require researchers to be unbiased, one still should add value to the broad base of knowledge. This study just doesn’t take me where I need to go. When I think about the challenge many businesses face today in deciding whether to move to online learning or stay with classroom-based learning, cost has two measures: the financial cost and the social cost. Calculating the financial cost is simple; add up the dollars and cents. Calculating the social cost is much more difficult. To truly understand the value of each method I need to know what will, or will not work in each setting. Only then can I truly determine which is the best tool for my organization and the type of instruction I want to deliver. But in fairness to the authors, how does one determine what is an equivalent in this kind of setting? And if the researchers aren’t technically savvy, how will they be able to build an equivalent online course?
Wednesday, March 18, 2009
Quantitative Review: “Effects of Training in Universal Design for Learning on Lesson Plan Development”
Universal Design for Learning (UDL) is becoming a common term when thinking about learning in both special education and general education classrooms. In this article, the authors attempt to use evidence-based practices to address the problem that there “is a lack of scientific investigation on the feasibility, application, or use of UDL” (p. 109). Their purpose was “to determine the effects of teacher training about UDL on the lesson plan designs of special education and general education teachers in a college setting” (p. 109). The authors note that “before UDL can have a profound impact on teaching and learning, there must be evidence that teachers can learn to use it in planning instruction for students with disabilities” (p. 109).
Now, I was a little disappointed after reading this, because the quantitative research was conducted with pre-service teachers learning UDL in an educational setting rather than with current classroom or special education teachers, who deal daily with the ongoing demands of teaching, learning UDL through professional development. The training occurred in an artificial environment without the real context of time constraints, classroom management, and the specific daily achievement levels of students, so it is unclear how applicable it is to the real world. For example, current teachers may plan with a real context in mind, whereas pre-service teachers plan with a hypothetical one. The students were asked to plan their UDL lessons around case studies, but planning for a hypothetical situation, where the lesson never has to actually work, is very different from planning for a real need. Due to these external threats to validity, it is very difficult to generalize from this study that a teacher trained in UDL in the educational system will be more likely to use UDL principles in planning and thereby have a “profound impact on teaching and learning” (p. 109), as Spooner et al. advocate.
Now, I was pleasantly surprised that the article included both special education teachers and classroom teachers since, in BC, it is usually these teachers who work together in an educational setting to plan Individual Education Plans to support students with special needs. The researchers ensured that an equal number of each was included in both the control and experimental groups. This is critical to keeping the two groups comparable, as general education teachers may have different experiences around their understanding and application of UDL principles when compared to special education teachers.
The dependent variable was clearly defined. The dependent measure of success was the total score for inclusion of the three essential UDL principles in the lesson plan, with an individual score also assigned for each principle: representation, engagement, and expression. This rubric was very interesting, and is something I am currently considering as a measure in my own study. Since this study set out to be a deliberate application of evidence-based research, the methods section was clearly defined and could be easily replicated in future studies.
The study used pre- and post-tests to measure the degree of change after the intervention. There were no threats to validity through the process of testing, because participants were given different case studies for the pre-test and post-test, so the circumstances from which to develop the UDL lesson plan were completely new. Because the UDL lesson was a requirement of the course the students were taking, all students had to complete it; the control group was identified, but received the lesson after the post-test was completed. There was a threat to internal validity because the control group and the experimental group were easily identified: the control group arrived one hour later than the experimental group and participated in the second part of class. This could cause diffusion of treatment, as the control and experimental groups could communicate. Additionally, it was the researchers themselves who taught the lesson to the experimental group. Consequently, there were threats to the ecological validity of the research through the novelty and disruption effect, the experimenter effect, the interaction of history and treatment, and pretest sensitization. How the need for this intervention was introduced in the course prior to the intervention could also have affected ecological validity, because the experiment depends on how the course content was established prior to the treatment.
The researchers provide a reasonable explanation of the results. However, the implication that "universally designed concepts might save teachers an extensive amount of time by creating modified lesson plans rather than changing them after the fact" (p. 114) appears very difficult to generalize from. Spooner et al. noted that teachers were given only a twenty-minute period to complete one lesson plan. But how realistic is this in a general education setting, where the classroom teacher is responsible for planning many lessons throughout the day?
Thus, the article "Effects of Training in Universal Design for Learning on Lesson Plan Development" is a quantitative study that confirmed teachers can learn to use the three principles of UDL in planning instruction (Spooner et al., 2007). However, due to the problems with validity and reliability, it is unclear whether these results would hold over time and whether the lesson plans would work in a real-world context. The authors address the study's limitations in a labeled section of the paper, and from these limitations provide suggestions for future research. Future directions include researching "general education teachers who hold valid teaching licenses and look at the effects of UDL training on their previous ways to write a lesson plan" (Spooner et al., 2007, p. 115).
Tuesday, March 17, 2009
Nowadays, computer-mediated communication (CMC) has become prominent in language education, and related research has addressed many aspects of its context in language learning and teaching. However, previous studies of CMC contexts have not comprehensively addressed the point that one needs to see the configured context co-constructed by language learners to fully capture the complexity of CMC practices, since the context for any learning activity is a complex, interconnected relationship among the contextual elements of the learning environment that learners configure for learning tasks. Therefore, with the purpose of examining "how a group of ESL students co-constructed online interactions of synchronous CMC practices within the dynamics of their group, while engaging with contextual elements of their CMC activities", the researcher explores the following questions: 1) What kinds of interactional patterns are a group of ESL students jointly constructing? 2) What kinds of interactional norms are the ESL students establishing within computer-mediated social interaction? 3) How do the ESL students utilize CMC activities for their linguistic, social and academic goals? [pg. 66] The three questions are designed according to ecological perspectives of language learning, the theoretical framework of this research, to broadly capture the study's aims. Consequently, the study helps readers understand the complexity of the CMC context in language learning and teaching, and to appreciate that "language learning is not only an issue of acquiring linguistic forms and functions, but also of developing a new self". [pg. 66] The findings also encourage language teachers to draw effectively on CMC in their teaching practices by raising their awareness of elements such as learners' proficiency with CMC tools, class size, and the teacher's role in conducting CMC activities.
Critique of the Study
The study was conducted in an intermediate adult ESL class of 16 students, including international graduate students, visiting scholars, and their spouses, at a university in the northeastern United States. In order to provide a detailed day-to-day picture of the culture-sharing group developing shared patterns of behavior, norms, and uses of the CMC activities, the researcher designed an ethnographic case study to develop an in-depth understanding of how ESL learners configure context in a CMC environment; writing in the first person helps readers feel close to, and think within, the authentic research environment. On the whole, the study is well organized, with arguments presented logically and persuasively, which makes it understandable and convincing. Apart from the writing style, its validity and report style are examined as follows:
(1) Validity: The researcher "was informed of the CMC activity of this ESL class after the class designed their CMC activity", which indicates that she did not intervene in the CMC activities from the beginning of the research. Then, "she was introduced to the class and got permission for her study from the students to observe all of their CMC meetings and all FtF class meetings." (See NOTE 5, pg. 80) During data collection, the researcher, serving as an observer, attended both CMC and FtF class meetings without being involved in any of the activities. For example, in CMC meetings she observed the teacher managing chat sessions without logging onto the chat program herself, and in classroom meetings she sat outside the circle, observing and taking notes on the FtF class activities. [pg. 69] This implies that the researcher tried to be "invisible" so as not to impose any discomfort or psychological burden on participants that might affect their natural interaction and performance. Although the researcher does not explicitly state her assumptions, beliefs, or biases, the ecological perspective on language learning, as a core construct of this study, implicitly influenced her data collection. For instance, even though the study focused only on the participants' online CMC activities, the researcher also collected data from their FtF meetings because "ecological perspectives are not only concerned with participants' online lives, but also their offline lives". [pg. 70, pg. 78] Therefore, the research design and analysis were implicitly shaped by the study's theoretical framework: ecological perspectives on second language learning.
Besides, to help readers better understand the phenomenon under study, the researcher gave a relatively comprehensive description of the research context, including the university program, the necessity for CMC meetings, participants' profiles, the CMC tools and activity environment, the classroom setting, and the online and offline meeting schedules. Meanwhile, the data analysis shows that the researcher was open to multivocality, presenting diverse points of view and interests drawn from participants with different professional roles and purposes, such as the teacher, graduate students, visiting scholars, and their spouses. Moreover, the researcher showed sensitivity to implicit meanings (tacit knowledge) gained from observations, interviews, and surveys, such as the learners' age and silence in communication, which bear directly on the analysis of findings; for instance, the participants' characteristics as adults could explain the configuration of face-work norms, and their silence in the online meetings highlights the importance of the teacher's role in the CMC context.
In addition, the researcher collected multiple sources of data through semester-long observation of both FtF meetings and Web-based chat meetings, including electronically saved chat transcripts, field notes, recorded class interactions from the FtF class meetings, formal and informal interviews with participants, surveys, and e-mail exchanges between the teacher and the ESL participants. To increase the credibility and validity of the analysis, the field notes, transcripts of recorded FtF meetings, interview data, and electronically saved chat data were triangulated to cross-examine and explore the patterns, norms, and uses of CMC activities.
Furthermore, the researcher analyzed the data iteratively to seek interconnections between elements, strengthening the understanding and validity of the results. For instance, after noticing certain behaviors of some participants (such as spouse learners), the researcher went back to check their profiles, such as typing skills or professional roles, and then interviewed them to support the analysis. The researcher also represented emic perspectives by presenting participants' online dialogues and quotations, increasing the reliability of the data interpretation, and the study was reviewed by anonymous reviewers to help ensure credible results. (See Acknowledgements, pg. 80)
However, there is no indication of outlier analysis, a representativeness check, or a coding check, which should be conducted to make the study more valid and more widely accepted.
(2) The Research Report: Under the main theme of context configuration, the analytical themes the researcher used included constructed interactional patterns and norms, configured affordances of the CMC environment, and uses of CMC activities for linguistic, academic, and social goals. [pg. 70] The data were analyzed according to these categories, with classified data supporting the related themes mainly in the form of direct quotations that authentically represent the participants' voices, so there was no obvious redundancy or overlap in the coding. Although there was no visual representation of the findings, this was reasonable, as the three-step finding-evidence-analysis form of description and explanation was already easy for readers to follow.
In addition, some findings in the study were supported by other research studies, which increased its reliability. For instance, at the end of the study the researcher highlighted "class size" as a critical factor in CMC discussions, a finding that echoes the conclusions of other studies of synchronous CMC by Kitade and Kotter, which suggest that no more than five people should be in any single synchronous virtual meeting at one time. [pg. 78]
Generally speaking, this is a quality study; however, some points should be reconsidered to make it more convincing. For example, by jumping directly from the introduction, the context, and the researcher's role to the analysis of findings, the study gives no detailed description of how the CMC activities were carried out, which would have created an authentic environment and a holistic picture to help readers understand the whole research situation. Also, in the findings section, the researcher uniformly presented each finding first and then offered evidence from the collected data to support it. This form of analysis led the researcher to overlook unexpected or discrepant findings irrelevant to her assertions, choosing only the evidence that spoke for them. Besides, the researcher did not identify her assumptions or biases in this study, nor did she explain how her own perspective might influence the interpretation.
Therefore, comprehensive validity checking and a persuasive report style still deserve the researcher's attention and effort in any redesign of this study.
1. Title: How English as a Second Language Graduate Students Perceive Online Learning from Perspectives of Second Language Acquisition and Culturally Responsive Pedagogy
2. Summary: Employing an interview method, this qualitative study explores the effects of online learning on ESL graduate students' English language improvement and how cultural diversity has shaped such students' attitudes toward online learning. Three researchers conducted individual in-depth face-to-face interviews with 7 international ESL graduate students (from China, Korea, Japan, the Philippines, Thailand, and Russia) who had taken online courses through a major Western university. By asking 8 open-ended questions with follow-ups, the researchers tried to ascertain the following perceptions:
- participants’ likes and dislikes about their online learning experiences
- the effects (positive and/or negative) of online learning on English acquisition
- the effects of online learning on individual learning styles
- the effect of online learning on individual attitude, motivation, and anxiety toward learning
- how cultural differences affect online learning in comparison to face-to-face class experiences
Data analysis shows the following results and discoveries:
- The online learning environment helps improve participants' writing and reading skills, providing more opportunities for vocabulary building.
- The online learning environment shows no sign of improving participants' speaking and listening skills, due to the lack of audio and visual materials.
- The online learning environment causes participants confusion due to the vernacular and acronyms used by native English speakers.
- Participants viewed language and culture as important issues in the online learning environment, which led them to choose not to take more than one course at a time.
- Attitudes toward online learning tended to be more positive with increased language proficiency, more online courses taken, and more time spent in the United States.
- Participants perceived challenges regarding culturally related difficulty with time management, lack of technological trust and/or experience, and the content and nature of online conversations.
3. Critique: This study is relevant to my research interests. However, in general, it fails to meet several criteria for a good qualitative study. Some of these failings hinder readability, while others may affect its internal and external validity.
- lack of themes – In the results section, the researchers merely described the overall findings in the first paragraph, without developing any themes, and then explained specific findings in detail over the following paragraphs. This makes it difficult for the reader to grasp the general concepts of the study.
- lack of visual representation – Since this study lacks themes and there were only 7 interviewees, I do think some data should have been presented visually (e.g., tables of participants' responses to each question, which would not only demonstrate overall perspectives but also highlight outliers).
- lack of triangulation – The researchers did not say much in the methodology section about their data sources. It seems that the interviewees' responses are the only source, which affects the study's internal validity.
- unknown coding and analysis procedures – The researchers did not address how the data (interview transcripts) were coded and analyzed in any section of the study. From what was presented in the results section, they seem only to have quoted relevant responses directly. Nor did the researchers mention whether they had members check the transcripts.
- lack of tacit knowledge – Tacit knowledge is invisible in this study. For one thing, the researchers did not describe in detail how they handled the data in the methodology section. For another, even though there are quotes from interviewees, the researchers did not mention specific tacit knowledge such as unarticulated behaviors or facial expressions.
- lack of interpretation from the researchers – Throughout the study, I can hardly perceive the researchers' voice, only the interviewees'.
- participant representativeness – Notice that most of the participants are Asian. This makes me doubt the representativeness of the participants. One plausible explanation is that the researchers considered most ESL learners to be Asian; still, this calls for more detail.
On the other hand, this study does well in: 1) employing many quotes from interviewees, which precisely demonstrate their perspectives; and 2) a simple, iterative process of data collection and analysis. The latter point is somewhat ambiguous, though, since the researchers did not address it explicitly. Still, it seems reasonable to suppose that they employed similar procedures for each interview: for one thing, this would be more convenient, economical, and time-saving; for another, if there had been differences, the researchers would presumably have described them.
With regard to the writing style, this study is highly descriptive rather than narrative. I wonder if this is one of the reasons the researchers' voice is lost; nevertheless, in this way the researchers seem to maintain their neutrality.
Monday, March 16, 2009
The participants of the study were undergraduate students at a lower-intermediate level of English proficiency. There were 21 students in the experimental group and 19 in the control group. The experimental group analysed a literary text using a computer concordancer, whilst the control group analysed the text manually. To ensure that the treatment was similar for both groups, the researchers used the same curriculum, lesson plans, and teacher. The students' levels of proficiency were also similar, based on the placement exam given by the centre to determine their grouping.
At the beginning and end of the class the Cornell Critical Thinking Test Level X was administered to test students’ level of critical thinking. Level X is a multiple-choice test with 71 items. Each item has three response choices and the test usually allows 50 minutes for completion. The results of the pretest and posttest were then compared to see whether there were any significant differences in the variables measured.
Cronbach's alpha coefficient, pre- and posttest means, and ANOVA were used in the statistical analysis. The findings indicate that "the use of a concordancer was found to enhance students' ability to think critically." Although the concordancer's percentage contribution to the difference in scores was relatively small, it was still significant. "The percentage is higher in students' ability to apply deductive reasoning and to judge credibility of assertions" (p. 485).
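To make the statistical comparison concrete, here is a minimal pure-Python sketch of a one-way ANOVA F statistic computed on pre-to-post gain scores for two groups. The scores are invented for illustration only; they are not the Daud and Husin data.

```python
# One-way ANOVA F statistic on gain (post - pre) scores for two groups.
# All numbers are invented for illustration; they are not the study's data.

def anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

experimental_gain = [8, 10, 7, 12, 9, 11]  # post - pre, concordancer group
control_gain      = [5, 6, 7, 4, 8, 6]     # post - pre, manual group

f_stat = anova_f([experimental_gain, control_gain])
print(round(f_stat, 2))  # prints 13.36 for these invented scores
```

A large F relative to its critical value (given the degrees of freedom) is what lets the authors call the group difference significant even when the effect size is modest.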
The following is a list of extraneous variables, along with my opinion on which variables affected the experimental outcome of the Daud and Husin study.
- Maturation. The variable doesn’t affect the experimental result because the treatment was conducted for only 8 hours (p. 485), and physical or psychological changes in the research participants are not likely to occur in such a relatively short time frame.
- Testing. The authors administered a pre- and posttest (p. 479), but did not say whether the questions in the two tests were similar; they only described what Level X looked like (p. 480). I think this extraneous variable could influence the result, because students might show an improvement simply from having had the experience of the pretest.
- Instrumentation. The variable doesn’t affect the outcome because the researchers used the same procedure in pretest and posttest.
- Differential selection. I don’t think the variable affects the result. The authors used intact groups, and it was noted that students in the class had a similar level of proficiency, verified by a placement exam given by their university to determine their grouping (p. 481).
- Experimental mortality. Table 3 (pre- and posttest means) shows the total number of participants (N) in the experimental and control groups (p. 484). I note that no individuals dropped out during the experiment; therefore, experimental mortality does not influence the result.
- Experimental treatment diffusion. This variable could affect the result because participants in the control group may have wanted to seek access to the treatment condition. There were four two-hour sessions (p. 482), but the authors did not mention how many sessions were conducted per week. It is possible that participants in the control group had access to the concordancer in a different learning environment.
- Compensatory rivalry by the control group. The authors did not say whether assignments had been announced to both groups, or whether control-group participants could have performed better by perceiving themselves in competition with the experimental group. I think this extraneous variable could affect the result: Table 3 in the article shows that "the control group gained more in the participants' ability to apply inductive reasoning" (p. 484).
- Compensatory equalization of treatments. Daud and Husin described their experimental design (pp. 478, 479). They didn’t use comparison groups. I don’t think the extraneous variable influences the result.
- Resentful demoralization of the control group. The authors did not discuss the possibility that control group participants could become resentful and demoralized, but I think this extraneous variable could affect the result: in Table 3, the control group's mean for credibility of assertions is lower at posttest than at pretest (p. 484).
Two threats to external validity can be described as follows:
- Interaction between setting and treatment. The treatment cannot be generalized from the setting where the experiment occurred to other settings. This research was conducted at a university in Malaysia; the outcome could be different at elementary or secondary schools, or at public or private schools in other cities or countries.
- Interaction of selection and treatment. The authors realized that personological variables, such as the student’s level of proficiency, might affect the generalizability of findings from experiments. The research participants were undergraduate students who were taking lower intermediate English proficiency courses. The outcome might have been different if the research had included participants from a lower or higher English proficiency level.
Effectiveness of Interactive Multimedia Environment on Language Acquisition Skills of 6th Grade Students in the United Arab Emirates, written by Almekhlafi, was published in the International Journal of Instructional Media. Nowadays, supported by Paivio's Dual Coding Theory (DCT), the inclusion of Interactive Multimedia (IMM) in education has attracted many researchers' attention. From the literature review, however, it can be concluded that IMM may not always be effective, as research results have been inconsistent; hence there is a new tendency to study the effect of IMM in relation to cognitive learning styles. In the UAE context, research on the effect of IMM on learning has not yet gained enough attention, particularly where multimedia is investigated in relation to cognitive styles. So, with the purpose of investigating the effectiveness of IMM for learning ESL language skills and its interaction with the cognitive learning styles of field dependence (FD) and field independence (FI), one main question needed to be addressed: "To what extent does IMM environment affect sixth grade students' acquisition of ESL skills?" [pg. 430] Results showed no significant difference between IMM users and non-users in overall ESL skills. However, when the participants were examined by cognitive learning style, results showed a significant difference between FD and FI learners in the experimental group, in favor of FI learners.
On the whole, this is a well organized article with each part clearly presented. There are some highlights in this study, as follows:
(1) Proper design:
This study investigated two independent variables (teaching method and cognitive learning style) acting on the dependent variable (achievement in ESL skills). Because two variations of teaching method (IMM versus traditional teaching) and two variations of cognitive learning style (field-dependent versus field-independent) were examined at the same time, a 2×2 factorial design (analyzed with ANCOVA) fitted the research. Furthermore, the cognitive learning styles (FD and FI) were identified within each group (experimental and control). Compared with assigning four separate groups, one for each combination, the two-group design to some extent helped the researcher avoid administrative factors that might affect the reliability of the results.
(2) Data analysis:
Using SPSS 10.0, a two-way analysis of covariance (ANCOVA) was used to test the null hypotheses. The pretest was entered as a covariate to control for variability in initial ESL skill levels, while the posttest was entered as the dependent variable. [pg. 434] The in-depth data analysis was discussed hypothesis by hypothesis, which made the data and explanations more understandable. Furthermore, the overall, sub-section, and interaction effects were all analyzed in the factorial experiment, which increases understanding of the phenomena being investigated; otherwise, the conclusions might have oversimplified the actual situation.
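The core idea of entering the pretest as a covariate can be sketched in a few lines: regress the posttest on the pretest, then compare each group's posttest mean after adjusting for its pretest mean. This is a deliberate simplification (a full ANCOVA uses the pooled within-group slope and an F test), and all scores below are invented, not Almekhlafi's data.

```python
# Sketch of the covariate-adjustment idea behind ANCOVA. All scores are
# invented for illustration; a full ANCOVA uses the pooled within-group
# regression slope and significance tests, omitted here for brevity.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def adjusted_means(groups):
    """groups maps name -> (pretest scores, posttest scores).

    Returns each group's posttest mean adjusted to a common pretest mean.
    """
    pre = [x for p, _ in groups.values() for x in p]
    post = [y for _, q in groups.values() for y in q]
    b = ols_slope(pre, post)
    grand_pre = sum(pre) / len(pre)
    return {
        name: sum(q) / len(q) - b * (sum(p) / len(p) - grand_pre)
        for name, (p, q) in groups.items()
    }

data = {
    "IMM":         ([40, 45, 50, 55], [70, 74, 78, 85]),
    "traditional": ([42, 48, 52, 58], [60, 66, 70, 76]),
}
adj = adjusted_means(data)
print(adj)
```

The adjustment removes the part of the posttest difference that is predictable from initial skill, which is precisely why the pretest is entered as a covariate rather than simply subtracted.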
(3) Reliable measurement:
Firstly, the GEFT was tested and validated as applicable to 6th grade students, so the classification of cognitive learning styles is reliable. Secondly, the pretest was developed by the course instructor and validated by a jury of university professors, supervisors of English, and experienced teachers, and its reliability was established through a pilot study; the posttest was identical in form to the pretest. [pg. 432-433] The test results were therefore correspondingly reliable. Besides, using the same instructor with both groups mirrored the same procedure, making the results more comparable.
Although the highlights above to some extent ensure the quality of this study, caution still needs to be exercised in interpreting it. Apart from some potential measurement validity problems listed at the end of the sheet, there are two main shortcomings:
(1) Unrepresentative sample:
Ninety 6th grade students were selected from a private school and all participants were local males. Also, they were non-randomly assigned to experimental (n=46) and control (n=44) groups. So the factors of gender (male), the context of school (private school) and the different number of students (46 versus 44) in two groups may affect the validity and further generalization of the study outcomes.
(2) Vague description of IMM CD-ROM and research procedure:
There was no detailed introduction to the IMM CD-ROM, such as its interface, how it is used, and its other characteristics. Besides, the procedure was not clearly described: for example, participants in the experimental group were asked to enhance their learning by doing selected homework at home using the CD-ROM, but the homework for the control group was not mentioned. Moreover, no pretest or posttest question sheets were attached for better understanding. Therefore, the description of materials, measurement, and procedure is not sufficient for future replication.
(3) Weak conclusion:
After the data analysis and discussion, the researcher only restated the study results in the conclusion, without in-depth analysis of the findings or suggestions for practical use. Although the recommendations provided may benefit future research, the weak conclusion does not highlight the purpose and significance of the study.
Sunday, March 15, 2009
Here's a link to my discussion re: the article:
Sarah Parsons, Anne Leonard, Peter Mitchell
Virtual environments for social skills training: comments from two adolescents with autistic spectrum disorder (2006)
Computers & Education, Volume 47, Issue 2, 186-206
The relationship between social networking and academic performance: a correlative quantitative study
Citing the importance of social networking to academic performance and professional success, Hwang, Kessler and Francesco (2004) investigate the relationships between social networking and academic performance among both Asian and North American business students. All of the authors are affiliated with university business schools, and the end of the article notes each author's professional affiliation, providing clarity on any institutional biases that may exist. Both the authors and the subjects come from three business schools, so while these findings may apply outside the context of an academic business research setting, they may not be representative of, or generalize to, the wider population of students outside this context. The authors note this limitation in their discussion section and did not appear biased toward their subjects or a particular outcome.
In keeping with the business-centric focus of the study, the literature review in the introduction cites other management journals, most of which are rated by the ISI Web of Science Journal Citation Reports as having mid- to high-range impact factors, indicating a higher degree of influence and citation relative to other journals in their category. The literature review is used to establish the claims that there is a relationship between student networking behavior and academic performance, that networking strategies and styles are culturally influenced, and that social networking contributes to success.
All of the articles cited in the literature review are peer reviewed, based on descriptions of their editorial boards. Every article and chapter cited was also relevant to the study, contributing to its validity and building a clear case for engaging in this line of research. Each citation lends support to the following hypotheses:
- Definition of self varies, with emphasis on independence and personal aspects for individualists versus interdependence and group aspects for collectivists.
- Goal priority varies such that personal goals are more important for individualists, whereas group goals take precedence for collectivists.
- Determinants for social behavior vary such that individualistic behavior is dominated by self-focused attitudes, personal rights, and contracts, whereas collectivistic behavior is guided by norms, obligations, and duties.
- The nature of relationships varies such that individualists rationally consider the exchange, whereas collectivists emphasize the communality of the relationship, even when this represents a disadvantage. -- Hwang, Kessler and Francesco (2004)
Both the questions and the hypotheses are clearly stated in the article and labeled as such. The introduction itself ends with the following questions:
- How does culture predict learning-oriented networking behaviors?
- How do these learning-oriented networking behaviors impact performance? -- Hwang, Kessler and Francesco (2004)
Each variable considered in the study is defined in the methods section by how participants respond to various questionnaires. First, eastern and western cultures are classified as relatively more collectivist or more individualist, respectively. Participants in the study completed a questionnaire assessing how individualist or collectivist (referred to by the authors as I-C orientation) they are. The questionnaire assessed five factors: (1) “Stand Alone,” in which respondents value independence; (2) “Win above All,” corresponding to placing a value on winning in a competitive situation; (3) “Individual Thinking,” corresponding to the degree to which an individual performs according to their own norms versus group norms; (4) “Sacrifice,” in which individuals forfeit their own needs to the needs of the group; and (5) “Group Preference,” which signals a preference for working in groups rather than alone. High scores on the first three factors are considered individualistic, while high scores on the last two are considered collectivist. Additionally, two types of networking behavior are examined: (1) “Vertical,” in which social contact is sought from professors, bosses, or others considered higher in influence within the social network, and (2) “Horizontal,” in which social contact is sought from peers within the social network. Both the I-C and networking factors are clearly displayed in a table identifying which questionnaire items relate to each factor, complete with factor loadings, which provide a measure of the correlation between the factor being investigated and the actual scores given in the questionnaire.
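A factor loading of this kind can be read, roughly, as the correlation between an item's scores and a score for the underlying factor. A minimal sketch of that reading, using hypothetical Likert data and each participant's item mean as a stand-in factor score (not the authors' actual items or method):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 Likert responses (6 participants x 3 items); none of these
# numbers come from the study -- they only illustrate the idea.
items = [
    [5, 4, 5],
    [4, 4, 3],
    [2, 1, 2],
    [5, 5, 4],
    [1, 2, 1],
    [3, 3, 3],
]
# A simple proxy for the factor score: each participant's mean across the items.
factor = [sum(row) / len(row) for row in items]
loadings = [pearson([row[j] for row in items], factor) for j in range(3)]
print([round(l, 2) for l in loadings])
```

Items that track the factor closely will show loadings near 1, which is what the authors' table makes visible for each questionnaire item.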
The sample itself is fairly large, with 253 participants in the US, 266 from Hong Kong and 131 from Singapore, for a total of 650 participants. If we consider the population to be undergraduate business students in the US, Hong Kong and Singapore, then the sample is representative. However, as the authors state, this sample may not be representative of the larger population of all students or all people, suggesting the need for further research in other contexts. Almost two thirds of the participants were female, and the mean age was 20.8 years.
The sub-groups of eastern and western students were valuable in exploring differences between these cultures, which were found when examining differences in scores for the two networking orientations, with eastern participants engaging in more horizontal networking behavior and western participants engaging in more vertical networking. This suggests that more collectivist cultures engage in more peer networking while individualist cultures engage in more ‘ladder climbing’ networking. Interestingly, horizontal networking can be thought of as directionless, in that it applies to any peer, while vertical networking tends to be more directional, with an orientation toward those of higher social status than the seeker.
Variables were operationalized as responses to questions, and these questions identified I-C and networking orientation reliably. Reliability measures are presented, with Cronbach alpha coefficients between 0.77 and 0.84; the reliability of the horizontal networking behavior measure is 0.92 and that of the vertical measure is 0.89, so all measures were high enough to be considered reliable instruments. The assessment of academic performance was normalized within each country and was based on the four highest final grades in each participant's previous semester. The resulting normalized grade measure had a reliability of 0.92, creating a reliable instrument for the evaluation of academic performance.
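Cronbach's alpha for a set of items can be computed from the individual item variances and the variance of the total score. A minimal sketch with hypothetical responses (the data and scale are invented for illustration, not taken from the study):

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # transpose: one tuple per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical Likert responses (5 respondents x 4 items); illustrative only.
rows = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 1, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(rows), 2))  # a value above ~0.7 is conventionally "reliable"
```

Coefficients in the 0.77-0.92 range reported in the study sit comfortably above the conventional 0.7 threshold.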
Because the entire procedure involves simply issuing a well-defined questionnaire and gathering grades from the previous semester, this study would be very straightforward to replicate. Every question is identified, every construct is explained, and the method itself is clearly described. This was not an experiment, so there was no control group, no random assignment and no pre-post procedures.
The authors do not describe the format of the questionnaire (electronic, oral or paper), nor do they describe the conditions under which the questionnaire was carried out, but I do not believe that the format or the conditions of the questionnaire would have a large impact on the responses given. Because of the relatively large sample sizes, and the diverse cultures in which they were administered, any influences of the test environment or prior exposure would be minimized.
It appears that the appropriate statistics were used to determine reliability and factor loadings for each variable. Analysis of variance was used to determine post hoc differences in networking behavior by country, and standard error was used to determine post hoc networking means within each country at standard confidence intervals. Interestingly, a comparison between horizontal and vertical networking within each country showed significant differences in Hong Kong and Singapore and no difference in the US.
The discussion notes the surprising result that the “Stand Alone” variable was the only reliable predictor of both social networking behaviors. Those scoring high on the “Stand Alone” scale also seek out others – both horizontally and vertically – for information, which seems counter-intuitive given the way this variable was operationalized. While the researchers show a relationship between networking and academic performance, they make the fundamental error of inferring causation from correlation. In other words, while it seems to be the case intuitively, networking may not lead to higher academic performance. Students who perform better academically may be better at networking because they are more confident, more socially intelligent or more reliant on others.
This study suggests that experimental research needs to be carried out in order to uncover actual causation. Researchers could, for example, have four groups of students – one that engages in frequent vertical networking (with tutors or professors during office hours) but not horizontal, one that engages in frequent horizontal networking (in peer study groups) but not vertical, one that engages in both types of networking and one that does not engage in any networking. By looking at past networking behavior, changes resulting from the intervention and tracking academic performance over time, these issues may be resolved.
The authors provide explanations for the results, stating that the contradictory relationship between “Stand Alone” and networking may be related to “Stand Alone” students being self-reliant in the way that they actively seek out information from others. Unfortunately, the survey did not include questions that might reveal an orientation toward active, self-directed behavior as opposed to passive group following. Seen in this light, networking takes on a dimension that is missing from this study. Assuming that there is a causative relationship between networking and academic performance, the authors recommend ways in which networking might be facilitated in academic contexts.
The limitations of the study, particularly its limited generalizability, are discussed, and recommendations for future research are offered, though not ones that would seek to expose an actual causative relationship.
Hwang, A., Kessler, E., & Francesco, A. (2004). Student Networking Behavior, Culture, and Grade Performance: An Empirical Study and Pedagogical Recommendations. Academy of Management Learning and Education, 3(2), 139-150.
Saturday, March 14, 2009
On p. 526, in "How Do You Evaluate Narrative Research?", the author mentions some questions that may be used to evaluate a narrative report. One of the questions is: "Is there evidence that the researcher collaborated with the participant?"
My question is: how does one demonstrate evidence that the researcher collaborated with the participant?
All I can think of that could serve as evidence of the collaboration is:
- the researcher directly mentions the collaboration in the methodology section
- the combination of the participant's quotes and the researcher's interpretation in the results or discussion section
QUANTITATIVE STUDY CRITIQUE OF: EFFECTS OF AUDIO TEXTS ON THE ACQUISITION OF SECONDARY-LEVEL CONTENT BY STUDENTS WITH MILD DISABILITIES
Boyle et al.’s (2003) study investigates the efficacy of audio texts and the combination of audio texts with structured instruction in a population of learning disabled students.
Of the six researchers involved in this study, three are from the Recording for the Blind & Dyslexic (RFB&D) organization. The remaining three researchers are from Johns Hopkins University. Although RFB&D is a non-profit organization, positive outcomes from this research study promote the efficacy of the audio recordings that RFB&D provides. The RFB&D (2007) website states:
"Students who had access to RFB&D's AudioPlus textbooks achieved a 38.1% increase in post-test scores compared to peers in the control group, whose scores increased by 21%"
Interestingly, the information displayed (in percentages) is a reinterpretation of this published study. Clearly, the RFB&D institution benefits from and continues to amplify the positive findings of the study.
The researchers believe that an intervention (their intervention) is needed. The researchers state “alternative [non print] instructional methods are needed to convey content information effectively and efficiently” (p. 204). In addition, “strategy instruction is viewed as one of the key components to increasing reading comprehension” (p. 204). The researchers show pre-study bias in stating the “greater efficiency” of CDs over books on tape. Although this intuitively makes sense, the researchers provide no testing or research to back the claim that “CDs enable students and teachers to work with greater efficiency” (p. 204).
The research questions in this study are obscured as research "purpose[s]": the purpose was to investigate the effect of (a) audio text and (b) listening strategies on student performance and comprehension (p. 205). In the Results section (p. 211), the research question is clarified and stated as "whether the use of an audio textbook with and without a strategy enhance content acquisition for high school students with mild cognitive defects … compared to students not provided with assistive devices". The hypothesis is concealed as a premise: "The study was premised specifically upon the need for students with LD and other mild cognitive disabilities to be skilled in actively processing textural information in a way that facilitates understanding and remembering" (p. 205). The outcome that the researchers were hoping to find was that LD students using audio text and note-taking strategies would be better able to create clear and complete notes (p. 210). Unfortunately, it is not until the Discussion section (p. 212) that the hypothesis is plainly stated: "It was hypothesized that the SLiCK [Set it Up, Look Ahead, Comprehend, Keep it Together] strategy would increase the effectiveness of the audio text book by providing both learning and organizational strategies".
From the first paragraph, the researchers document the gap between curriculum and the needs of LD students. Throughout subsequent paragraphs, the researchers make a convincing argument for how to augment LD student literacy. However, the premise that CDs are more convenient than audio tapes (for LD student instruction) is not adequately explored.
The researchers describe this research design as an experimental design using a "pre-test post-test design" (p. 211). The methodology uses random assignment of participants to groups and includes two experimental conditions and one control condition. One group received the audio-only treatment, one group received both the audio and SLiCK instruction, and the control group did not receive treatment.
In this study, the dependent variable is student acquisition of academic content. The Methods section of this study describes the details for delivery of instruction and experimental procedures. Two types of measures were used to assess outcomes: (1) cumulative content acquisition tests and (2) short-term quizzes (p. 205).
The research procedures (strategy) in this experiment are covered in five paragraphs. The four components of the SLiCK strategy include setup, familiarization, comprehension, and synthesis (p. 211). The SLiCK strategy for treatment groups is easily replicated. For the audio portion, the procedure for explaining how to operate and navigate the CD menus is described (p. 210). The control group procedures are also easily reproduced. The control group had access to regular teacher instruction and support but did not have access to the audio texts or the SLiCK instruction (p. 208).
The researchers were surprised to find no significant difference in scores between the experimental groups (audio-only treatment compared to audio-plus-SLiCK treatment). The study's results (significantly higher scores for both experimental groups) support the researchers' conclusions (the experimental groups achieved higher quiz and cumulative test scores). The researchers note that their study contrasts with the findings of Torgesen, Dahlem, and Greenstein (1987) on LD research using audio and organizational strategies; Torgesen et al. found that LD students required audio as well as advance organizers to increase content learning over time.
The authors suggest that further research with audio texts is required to ensure audio texts are effective. In addition, the authors encourage the development and testing of further complementary strategies for enhancing the effectiveness of audio-based assistive devices (p. 213). The researchers draw a reasonable conclusion that curricula for LD students can be enhanced with audio texts to improve content acquisition. The researchers also conclude that the use of audio with advance organizers for LD students needs further study.
The researchers acknowledge that this study's findings are limited in their generalizability to any population. Worse, the study's unique mix of participants and personological variables (including age as well as ability) excludes generalization even to LD school settings with different mixes of participants.
The researchers explicitly describe the experimental procedures (in seven paragraphs), and the research methodology appears to be easily repeatable. Although the study used randomized assignment of participants, person variables may be responsible for the unexpected result of the audio-plus-SLiCK group not outperforming the audio-only group. Multiple-treatment interference (from multiple instruction techniques) is minimized by having consistent instructional delivery, and observers monitored adherence to the instructional practices through in-class observation and checklists. Because the participants in the control and both treatment groups were aware of being studied, the Hawthorne effect (participant improvement based on the awareness of being studied) is somewhat mitigated. However, the treatment groups received extra attention (instruction in note taking and in how to use the CDs), so perhaps a greater Hawthorne effect was at work for the treatment groups. Another potential threat to ecological validity shows up in the novelty and disruption effects present when the treatment groups used the audio CDs and the SLiCK program. The researchers minimize experimenter bias by having the experiment (instruction) delivered by teachers rather than researchers. If the researchers had not excluded the results of participants who missed the pre-test, sensitization to the pre-test could have been examined; the same might also hold true for post-test sensitization. It is unfortunate that the researchers chose to exclude (rather than include as additional factors) participant data for missed pre-tests and post-tests. I also wonder whether the interaction of history and treatment limits generalization beyond the audio-CD era of 2003: CDs have become passé, and iPods, iTunes and MP3 players are the current audio fad. This study dutifully measures knowledge acquisition scores (the dependent variable).
The discussion section anecdotally describes the improvement of note-taking skills and their transfer to another domain (science class). If the dependent variable had been the acquisition of note-taking skills and their transfer to other domains, perhaps this study would have had greater impact and generalizability. Such was not the case.
Boyle, E. A., Rosenberg, M. S., Connelly, V. J., Washburn, S. G., Brinckerhoff, L. C., & Banerjee, M. (2003). Effects of audio texts on the acquisition of secondary-level content by students with mild disabilities. Learning Disability Quarterly, 26(3), 203-214. Retrieved from http://www.jstor.org/stable/1593652
RFB&D. (2007). Press release - RFB&D audio textbooks boost comprehension scores for students with learning disabilities. Retrieved February 7, 2009, from http://www.rfbd.org/mediapr23.htm
Torgesen, J. K., Dahlem, W. E., & Greenstein, J. (1987). Using verbatim text recordings to enhance reading comprehension in learning disabled adolescents. Learning Disabilities Focus, 3(1), 30-38.