Sunday, March 22, 2009

Qualitative Critique: Language Experiences in Web-based Language Learning

http://web.ebscohost.com.proxy.lib.sfu.ca/ehost/detail?vid=1&hid=3&sid=09451e65-a45e-4917-8ca6-3ba420da898a%40SRCSM1&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d

Introduction
The qualitative paper I analyzed is from an investigation carried out by Jeong-Bae Son from the University of Southern Queensland. The study, titled ‘Learner Experiences in Web-based Language Learning (WBLL)’ (Son, 2007), was published in the journal Computer Assisted Language Learning. This research study explores the language learning experiences of English as a second language (ESL) learners through the use of Web-based programs. For this post I have focused on the data collection methods carried out by Son.

Multiple Data Sources & Triangulation
Son used ‘multiple data sources and triangulation of data collection methods to develop a rich description and discussion of learner experiences in WBLL’ (p. 22). For instance, a pre-questionnaire was used to gather background information on the subjects such as their age, gender, and previous experiences with computers. However, the pre-questionnaire results were not presented to the reader (p. 23). These results would have given readers a better overall sense of the learners’ comfort with language learning, computers, and Internet navigation. A final post-questionnaire was administered to the learners after the last WBLL session. This questionnaire consisted of 11 closed questions and five open-ended questions. It is important to note that Son provided only positively phrased questions, which may have skewed the results, as students may have answered ‘strongly agree’ or ‘agree’ to all the questions just to finish the survey faster (p. 29). It is important for researchers to include questions that are phrased negatively to encourage respondents to read each question carefully (Creswell, 2008). For instance, instead of ‘I was comfortable using the web during the web activities,’ the question could have been restated as, ‘I was not comfortable using the web during the web activities’ (p. 24).
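As a side note for readers who work with such survey data: when a questionnaire mixes positively and negatively phrased items, the standard remedy at analysis time is to reverse-score the negative items before aggregating, so that a higher value always means a more positive attitude. A minimal sketch of this (the 5-point coding below is an illustrative assumption, not taken from Son’s actual scoring):

```python
# Illustrative sketch only (not Son's coding): reverse-scoring a
# negatively phrased Likert item on a 5-point scale, where
# 1 = strongly disagree ... 5 = strongly agree.

def reverse_score(response: int, scale_max: int = 5) -> int:
    """Reverse-score a negatively phrased item so that higher
    values consistently indicate a more positive attitude."""
    return scale_max + 1 - response

# A student who 'strongly agrees' (5) with the negative wording
# 'I was NOT comfortable using the web' is rescored as 1,
# i.e. a negative attitude on the common scale.
print(reverse_score(5))  # 1
print(reverse_score(1))  # 5
print(reverse_score(3))  # 3 (the neutral midpoint is unchanged)
```

With both item versions on the same scale, responses can be averaged or tabulated together, and a respondent who mechanically ticks ‘agree’ on everything becomes visible as inconsistent rather than uniformly positive.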

Another source of data collection was observation forms. The first form consisted of eight WBLL sessions being videotaped for later playback and analysis. The second form was completed in real time by a research assistant, who recorded on-task and off-task behavior during both online and offline activities. This allowed the researcher to document how the students used their time when completing tasks (p. 24).

The last form of data collection was through interviews conducted one week after the WBLL sessions. Each participant had an interview with the classroom teacher. ‘The purpose of the interviews was to cross-check students’ responses to their post-questionnaire and to seek more information which was not possible in Section 1 of the post-questionnaire (p. 24).’ This additional interview step allowed for the evidence to be triangulated through the different forms of data – observational field notes and interviews. Evidence was further corroborated through different individuals such as the research assistant (real-time observations), and the students (pre- and post-questionnaires) (Creswell, 2008, p. 266).

Member checking, which represents the emic perspective (the view from inside the culture) (Creswell), was not observed in this study. It is important for researchers to have the subjects review the statements made in the report to ensure accuracy and completeness (Gall, 2003). The process of data analysis is iterative as the researcher asks the subjects the same types of questions through various forms such as questionnaires and interviews, as well as through observational forms. When analyzing Table 2 (p. 29), there did not appear to be any outliers in the data. For instance, none of the students responded ‘strongly disagree’; only questions five and eight had three or more students responding ‘disagree,’ but this observation was not clearly explained in the analysis. A better analysis was performed on the open-ended questions, where a brief explanation was provided for why students may have answered negatively to a particular question (p. 30). Thus there was one unexpected or discrepant finding that the author needed to explain: the ‘one-third of the students [who] disagreed that their experience in WBLL made their language course more interesting’ (p. 33). Son explained this by suggesting that the course content and web materials should be tailored to the students’ needs to ensure students are comfortable and confident enough to proceed with the activities. He also pointed out that the students’ computer skills need to be considered when expecting a task to be completed. The findings of a study are strengthened when an author is able to discuss outliers.

Moreover, the study did not carry out any long-term observations of WBLL with the ESL students, other than the one-week follow-up interview after the last WBLL session. The author does mention that ‘it was difficult to measure actual learning outcomes in a statistically meaningful way over such a short period of time’ (p. 34). Furthermore, no representativeness check was performed; such a check would have determined whether the findings are typical of the situation (Gall, 2003).

3 comments:

  1. Hi Pam,
    I like your use of color and bold text. I found this useful in focusing my attention. Way to go on condensing your qualitative critique into an easily readable blog post.

    Frank

  2. I was going to say the same thing as Frank, so this is a redundant post. I like the red; I'm going to suggest that students who use a blog in future classes also do this...in fact, I'm going to include it in my recent revision of my online undergrad course in LD. I'll bet that you are wishing I used red in this blog post that seems to ramble on forever. Good critique!

  3. Hi Pam,

    Your comment on ensuring questions were phrased to ensure you are measuring what you think you are measuring got me thinking. I recall a few years back a company was doing personality surveys for Walmart. Walmart had identified all of the personality traits they thought were necessary for a Walmart greeter, so anyone applying for the position had to take this personality test. What was interesting is that all of the characteristics were written in a positive light, so that even traits Walmart did not want would seem like good qualities. This makes it much harder to trick the test and select the characteristics you think Walmart is looking for.
