Wednesday, March 25, 2009

Analysis of “Peer Interaction and Critical Thinking: Face-to-face or Online Discussion?” by Jane Guiller, Alan Durndell and Anne Ross

I'm going to have a crack at copying Pam's idea here. Hopefully it works.

In their study “Peer Interaction and Critical Thinking: Face-to-Face or Online Discussion?”, researchers Jane Guiller, Alan Durndell and Anne Ross compared live discussions with online discussions to examine how the two modes of communication influence critical thinking skills and how they might be used to develop them. Within a repeated-measures design, a 21-point scale was used to quantitatively measure the depth of critical thinking in those discussions. The researchers concluded that both modes of communication should be integrated to provide optimum conditions for advancing critical thinking: while the online discussions afforded a depth of critical thinking unobserved in the live discussions, more “brainstorming” and collaborative learning and thinking happened during the face-to-face discussions.

A clear and immediately evident strength of the current study is its concrete definition of the terms central to the research. The key terms “collaborative learning” and “critical thinking” are defined for the purposes of the research almost immediately in the introduction. Prior studies’ definitions of critical thinking, which more or less suggest it is a set of skills involving the development of reasoned argumentation in a social context, are referenced, lending credence to the definition used in the current study. The introduction also lays the foundation for the study’s treatment of the collaborative aspect of learning through discussion and cites ample previous research to support both the terminology and the method of quantitative measurement used.

Although both the researchers and the subjects of the study have psychology backgrounds, there is no discernible bias toward or against that institutional body of thought, nor is there any betraying their connection to Glasgow Caledonian University, where the work was done. The researchers did indicate a belief in the benefits of both asynchronous online discussions and live discussions for developing critical thinking skills. In addition to the subject of the study being two modes of collaborative learning and teaching, the citation of Vygotsky (p. 188) and the discussion of guided practice in the context of “scaffolding” shortly thereafter suggest a social constructivist theoretical orientation. There is no justifiable reason to believe, however, that this constructivist framework or the belief in the benefits of online and live discussion for critical thinking development lends itself to a preferred outcome regarding the two modes of communication. It is therefore reasonable to conclude that the reliability and validity of the research are not adversely affected by those biases.

Though not included in the introduction, the hypotheses merit their own sub-heading under a heading called “The Present Study”. All three hypotheses are clearly delineated and quantifiably measurable using the 21-point criteria originated by Anderson et al. in 2001 and outlined by the authors in the attached appendix (p. 189). The researchers in this study opted to measure the discussions using the Anderson methodology because it seeks to reveal indicators of critical thinking, which is precisely what they set out to do. Using so precise a measurement tool adds significant validity to the experiment.

The authors successfully make the case for their study. After reading the introduction, one has a very clear understanding of the problem, the necessity of the current research, the definitions of the salient terms, and the means by which the research will further understanding by filling gaps and addressing inconsistencies in prior work, including what is to be measured and how.

The independent and dependent variables are explicitly, clearly and operationally defined. The independent variable was the mode of discussion: face-to-face (condition 1) or online (condition 2). To assess the two conditions, the current study employed a repeated-measures design, an appropriate choice for this research. By employing this design, the researchers were able to avoid the threats to internal validity that plague similar research: problems such as regression, mortality, and maturation, which arise when comparing separate groups, were eliminated (Creswell, 2008). By counterbalancing the procedural conditions (the face-to-face groups met twice to allow for the further reflection on their subject that the asynchronous online discussion would afford), a great degree of internal validity was preserved. Participants were made privy to the purpose of the design, and the focus was on critical thinking more than on the mode of discourse, leaving little room for compensatory rivalry, compensatory equalization, or resentful demoralization among the participants.

Experimental procedures were somewhat ambiguous on the face-to-face side. What were the “rules” of the live discussions? What was the structure? Did instructors participate? Although the material and subject to be discussed and the criteria the students would use to evaluate it were given, the specifics of the actual structure of the face-to-face forum were not addressed. How formal was the discussion group? Was it chaired? These are valid questions, as the structure of the face-to-face discussion would most likely affect the level of engagement of the participants, which would in turn affect the results of the study. The online condition was briefly outlined, but similar questions regarding instructor participation, structure, and formality exist there as well. This may present an opportunity for further study, as more specific control over these conditions might produce more accurate and valid results. Apart from the questions outlined above, the procedure for the study was clearly illustrated, and, aside from issues arising from those same questions, the study could be replicated more or less simply.

The study concludes with suggestions for further research and implications for daily practice. The authors explicitly state that “further research is required in order to investigate the extent to which these results extrapolate to other collaborative critical thinking activities and across disciplines” (p. 198). Another suggested avenue of research is how critical thinking developed through blended learning tasks such as online or live discussions may transfer to other tasks requiring similar skills. Regarding implications for daily practice, the study concludes that a “combination of both face-to-face and online discussion seems to be most beneficial to students” (p. 198).

Teachers could model their classroom lessons on the findings of this study. Based on what Guiller, Durndell, and Ross found, teachers could structure class discussions so that an initial live, face-to-face discussion is followed up with an online discussion. Additionally, the 21-point assessment tool could easily be adapted for classroom use to help advance students’ critical thinking skills.

1 comment:

  1. This article is very helpful for my research design, which studies the effectiveness of integrating IRC and e-mail into a Business English Writing course with regard to the writing proficiency of Chinese university students. I also involve synchronous discussion and asynchronous interaction in the teaching, compared with face-to-face instruction.