Australian Journal of Educational Technology
2003, 19(2), 241-259.
Models to evaluate online learning communities of asynchronous discussion forums
Khe Foon Hew and Wing Sum Cheung
Nanyang Technological University, Singapore

Recent developments in learning theory have emphasised the importance of context and social interaction. In this vein, the notion of a learning community is gaining momentum. With the advent of asynchronous online discussion forums, learning communities need not be confined to any specific geographical location, as people can now interact with one another at any place and time convenient to them. In this paper, we describe appropriate models that can evaluate these online learning communities. We examine pertinent issues including learner-learner interaction, learner-teacher interaction, the thinking skills of the learners, the levels of information processing exhibited by learners in the online discussion, and the roles played by the online moderator. A practical example is also provided to illustrate how these models can be used. Finally, we discuss some drawbacks related to each model and ways for overcoming them.
Introduction
Traditionally, the education of young people involved primarily the transmission of a fixed knowledge base (Roehler & Cantlon, 1997). Under this educational paradigm, the teacher's task was to provide learners with knowledge, while the learners' goal was to learn by individually digesting and organising the information received. Learning was then presumed to have taken place when the learners had acquired in their heads the knowledge presented by the teacher (Hsu, Chen & Hung, 2000). In recent years, however, many educators have increasingly emphasised the social nature of learning, favouring learning environments that situate learners in authentic contexts (Barab & Duffy, 2000; Collins, Brown, & Newman, 1989). In this vein, the notion of a learning community is gaining momentum (Hung & Chen, 2000).
Nonetheless, providing learners the opportunities to engage in authentic learning experiences is a challenge in most traditional learning environments (Bielaczyc & Collins, 1999). Squire and Johnson (2000) argued that this is because learning communities tend to be "distributed across time and space, making them mostly inaccessible to the educator located in a traditional classroom environment" (p. 23). One way to bridge this gap of time and space is to use an asynchronous online discussion forum. Such forums allow members of a learning community to interact easily with one another, at any place and time convenient to them. They can also promote student centered learning (Harasim, 1989) and some critical thinking processes, such as reasoning and evaluation (Newman, Johnson, Webb & Cochrane, 1997).
Although the use of asynchronous discussion forums can afford online learning communities unprecedented learning opportunities, educators often face difficulties in evaluating such online communities. As Gunawardena, Carabajal and Lowe (2001) noted,
The development of appropriate methodologies for evaluating the myriad, ever changing forms of online learning presents a critical challenge to distance educators. The open ended nature of online learning, the multiple threads of conversation, and fluid participation patterns call for new ways of looking at evaluation. (p. 3)
This paper aims to help educators in the evaluation of online learning communities. We refer to an online learning community as a group of people who participate in an asynchronous online discussion forum with a common objective or interest, in order to learn from one another. We find it useful to adopt activity theory (Jonassen, 2002) as a guide to first help us identify the various evaluation issues that an educator might face when dealing with an online learning community. Once the evaluation issues have been identified, the appropriate models that can examine these issues can be delineated.
In the following sections, we describe activity theory, followed by a discussion on evaluation issues that an educator might face when dealing with an online learning community. Subsequently, we discuss appropriate models for addressing each of the mentioned evaluation issues.
Activity theory
Activity theory is a philosophy and cross-disciplinary framework for examining various forms of human practice as developmental processes, linking the individual and social levels at the same time (Kuutti, 1996), without reductionist simplifications. An activity is carried out by people motivated towards a goal (or object) and mediated by tools and the community (Pang & Hung, 2001). It is the transformation of the goal (or object) into an outcome that motivates the execution of an activity (Hung & Wong, 2000). See Figure 1.

Figure 1: Processes within an activity
(adapted from Cole & Engeström, 1993)
From Figure 1, tools can be perceived as mediating the processes between the subject and object; rules mediate the processes between the subject and the community; while roles mediate the processes between the community and object. Pang & Hung (2001) wrote:
In other words, tools are used by subjects to achieve an object; there need to be rules set up between subjects and the other members in the community in order to achieve the goals; and between members of the community, there needs to be a division of labor in order to achieve the object. (p. 36)
In an asynchronous discussion forum, the subject will be a member of an online learning community comprising peer learners, educators or external subject matter experts. The member makes use of tools (e.g. an asynchronous discussion forum such as Blackboard or Knowledge Forum) to exchange ideas and insights with other online members in learning some knowledge or skills (the object). While acquiring the knowledge or skills, there are certain rules to be adhered to by the group of learners (such as the use of non-offensive messages) and roles played by various members of the online community (such as the discussion moderator) in order to create a meaningful and memorable online learning experience.
Activity theory provides educators with a practical and holistic approach to the evaluation of an online learning community. By considering the various triads of nodes taken from Figure 1, we can form a possible structure for analysis. Due to space constraints, only two of these triads are considered in this paper.

Figure 2: Subject-community-object triad
The triad of subject-community-object describes how the participant of an asynchronous online discussion forum and the surrounding learning community collaborate to act on the object. Looking at this triad, an educator may want to address some possible relevant evaluation questions such as the following:
How do the learners and teachers interact with one another in the online learning community?
What types of thinking skills do the learners exhibit during their online discussion?
What levels of information processing do the learners exhibit in their message postings?
We grouped the aforementioned questions into two categories: interaction and cognitive processes of online learners. Models to evaluate these issues will be presented in the following section.
Subject-community-roles
The next triad is subject-community-roles, examining the roles played by the members of an online learning community in relation to the object of the activity. In an asynchronous online discussion forum, the role of the moderator is widely acknowledged as an important factor that may affect the success of the discussion (Ahern, Peck & Laycock, 1992). Typically, the roles of an online moderator can be classified into three different types: organisational, social or intellectual (Paulsen, 1995). Organisational roles include activities such as explaining the requirements and procedures of the online discussion, and spurring the online participation when it is lagging. Social roles, on the other hand, involve making participants comfortable in an online environment and valuing their contributions. Intellectual roles include bringing up issues that participants have missed, highlighting and pursuing further the important ones. In this paper, we will present an appropriate model that can evaluate the intellectual roles of the online moderator.

Figure 3: Subject-community-roles triad
Evaluation of an online learning community
Interactions in the online learning community
Interactions in an online learning community may consist of learner-teacher and learner-learner interactions (Moore, 1989). In this paper, we propose models that can examine issues such as:
The extent to which the learners and teacher are commenting and responding to each other's messages;
The extent of the construction of knowledge between the learners and the teacher and with other learners; and
The extent of social presence in the learning community.
Each of these issues will be discussed in turn.
Learner-learner and learner-teacher interactions
Henri's (1992) model, which contains an interactivity framework that allows for the analysis of the nature of interaction among contributors, is chosen to address this issue because it enables the educator to analyse the relationship among the message postings (Gunawardena, Lowe & Anderson, 1997). The relationship among the message postings reveals the extent to which the learners and teacher are responding to one another. Henri's (1992) interactivity framework differentiates between online messages that are explicit, implicit or independent. Explicit and implicit interactions are represented as a three step process where i) interlocutor A writes to B; ii) B responds to A; followed by iii) A's comment to B.
According to Henri (1992), explicit interactions are messages that are either in response to a question posed, or a commentary on someone else's message. In explicit interactions, the person to whom the communication is directed is indicated in the message. An example of an explicit interaction type of message is:
Hi Susan! I agree with you and Uma that having 2 teachers in the computer lab is ideal, other than the use of colour cups as mentioned by James, to indicate to the teacher that a student needs help.
Implicit interactions, on the other hand, are messages that include a response to or commentary on a prior message, but without indicating specifically to which message the contribution refers. Finally, independent statements are messages that contain new ideas, not connected to others that have been previously expressed in the online discussion. By differentiating between explicit, implicit and independent online messages, an educator can thus observe the relationships or patterns of communication between participants. These patterns, however, offer little insight into the contribution individual messages make to the emerging totality of constructed knowledge (Gunawardena et al., 2001). This leads us to the next issue.
Knowledge construction among online learners
To evaluate the extent of knowledge construction between the learners and the teacher, or among the learners themselves, educators might want to consider the Gunawardena et al. (1997) model as one possible scheme. Gunawardena et al. (1997) theorised that the active construction of knowledge progresses through five phases, and that although every instance of socially constructed knowledge may not move linearly through each successive phase, the phases are nonetheless consistent with much of the literature related to constructivist knowledge creation (Kanuka & Anderson, 1998). The five phases are described in Table 1.
Even as learners interact and construct knowledge with one another using asynchronous online discussion forums, one area of concern for educators is the high forum dropout rate due to the physical separation of learners (Rovai, 2002). Tinto (1993) emphasised the importance of community in reducing dropouts when he theorised that learners would increase their levels of satisfaction and the likelihood of persisting in the discussion if they feel involved in the learning community and develop relationships with other learners. Accordingly, the next evaluation issue explores the social presence in an online learning community.
Table 1: Five phases in the active construction of knowledge
(after Kanuka & Anderson, 1998)
Phase I: Sharing and comparing of information. For example: Statements of agreement or corroborating examples from one or more other participants.
Phase II: Discovery and exploration of dissonance or inconsistency among the ideas or statements advanced by different participants. For example: Identifying and stating areas of disagreement, or asking and answering questions to clarify the source and extent of the disagreements.
Phase III: Negotiation of meaning. For example: Negotiation of the meaning of terms or identification of areas of agreement or overlap among conflicting concepts.
Phase IV: Testing and modification of proposed synthesis or co-construction. For example: Testing the proposed synthesis against formal data collected or against contradictory information from the literature.
Phase V: Statement or application of newly constructed knowledge. For example: Summarising of agreements, or students' self reflective statements that illustrate that their knowledge or ways of thinking have changed as a result of the online interaction.
Social presence
According to Rourke, Anderson, Garrison and Archer (1999), social presence can be described as the ability of learners to project themselves socially and affectively into a community. The significance of social presence lies in its ability to support the cognitive and affective objectives of learning. Social presence supports the cognitive objectives through its ability to instigate and sustain critical thinking in a learning community (Rourke et al., 1999). Social presence also supports the affective objectives by making group interactions appealing, engaging and thus intrinsically rewarding, leading to an increase in academic, social, and institutional integration and resulting in increased persistence and course completion (Tinto, 1993). The Rourke et al. (1999) model, which examines three constructs of social presence, was selected to evaluate the social presence of an online learning community:
Affective responses (the expression of feelings and mood). For example: "I really enjoy using this asynchronous online discussion stuff. It enables me to present my thoughts neatly."
Interactive responses (expressions that communicate mutual attention and awareness). For example: "I agree with Titus on the need to add more graphics in order to make the design more interesting."
Cohesive responses (expressions that build and sustain a sense of group commitment). For example: "It's great working with you guys using this form of communication."
Besides evaluating the interactions and social presence in an online learning community, educators can also assess the cognitive processes of their learners. This is discussed in the following section.
Cognitive processes of the learners
Two specific evaluation questions pertaining to the cognitive processes of online learners will be addressed in this paper:
What are the types of thinking skills exhibited by the online learners during the online discussion?
What are the levels of information processing found in the online learners' messages?
Henri's (1992) cognitive skills framework, which evaluates critical thinking, offers one possible model for exploring the above questions. According to Henri (1992), there are five types of critical thinking:
Elementary clarification (passing on information without elaboration);
In depth clarification (analysis indicates insight and understanding of the nature of the problem);
Inference (evidence of inductive or deductive reasoning);
Judgment (expressing a judgment about an inference); and
Strategies (proposing a solution).
Each of these critical thinking skills can be further classified according to a surface versus in depth information processing dichotomy. In depth processing, for example, is indicated by messages that reflect organisation and critical evaluation of information, while surface processing is indicated by the mere repetition of ideas and the absence of explanation and justification.
Aspects of Henri's (1992) critical thinking model have been taken up and expanded upon by others (e.g. Newman, Johnson, Webb, & Cochrane, 1997). Newman et al. (1997) developed ten paired indicators of critical versus uncritical thinking in their model. This list of paired opposites represents surface level information processing (i.e. uncritical thinking) and in depth level information processing (i.e. critical thinking). The ten indicators are: Relevance, Importance, Novelty (new information, ideas, solutions), Bringing outside knowledge/experience to bear on the problem, Ambiguity and clarity, Linking ideas and interpretation, Justification, Critical assessment, Practical utility, and Width of understanding. Some examples of the different thinking skills and levels of information processing are provided below.
Critical thinking - surface level:
"I find that there are too many empty (white) spaces on the presentation slides." (This was classified as critical thinking - surface level of information processing since the author made his conclusion without giving any justification as to why it was not good to have too many empty spaces on a presentation slide)
Critical thinking - in depth level:
"I feel that the choice of your illustrations are quite well chosen, except for the birds. I feel that the birds are distracting because of their movements and they don‘t blend well with the other illustrations." (This was coded as critical thinking - in depth level of information processing because the author expressed a judgment and provided a plausible argument as to why his judgment was valid.)
Roles played by the online moderator or instructor of the online learning community
Kirkley, Savery, and Grabner-Hagen (1998) focused on the intellectual roles of the online moderator or instructor, by evaluating the different means of assistance to support learning that an online moderator or instructor can render to the learners. Seven means are described:
Scaffolding. e.g. guidance or comments given to help the learner master the materials and move to a higher level of understanding.
Feedback on performance. e.g. information (positive or negative) given by the moderator on specific acts or ideas.
Cognitive structuring. e.g. assistance given by the moderator to provide a structure for thinking that helps the online learner organise "raw" experience.
Modelling. e.g. when the moderator/instructor offers behaviour for imitation.
Contingency management. e.g. using praise or encouragement to reward desirable behaviours, or censure to control undesirable behaviours.
Instructing. e.g. giving explicit information on specific acts.
Questioning. e.g. using prompts to stimulate and provoke thinking by the learner.
Table 2 and Table 3 summarise the various models for evaluating the interactions, cognitive processes and intellectual roles of the moderator or instructor in an online learning community.
Table 2: Evaluation of an online learning community
Purpose of evaluation Evaluation model
To describe the nature of the learner-learner and learner-teacher interactions Henri (1992)
Rourke, Anderson, Garrison and Archer (1999)
Gunawardena, Lowe & Anderson (1997)
To examine the cognitive processes Henri (1992)
Newman, Johnson, Webb & Cochrane (1997)
To analyse the moderator and learners' online roles Kirkley, Savery, & Grabner-Hagen (1998)
Table 3: Summarising some models for evaluation
Method Indicator
Henri's Interactivity dimension (1992)
Unit of analysis: thematic unit
This model distinguishes between interactive versus non-interactive, and explicit versus implicit, interaction. Explicit and implicit interactions are defined as a three step process: a) communication of information; b) a first response to this information; and c) a second answer relating to the first.
Explicit interaction: Direct response (statements responding to a question by name); Direct commentary (statements about someone else's message by name)
Implicit interaction: Indirect response (statements that respond to a question without referring to it by name); Indirect commentary (statements taking up a previously expressed idea, but without referring to the original message by name)
Independent statement (statements that are not connected to others that have been previously expressed in the online discussion)
Rourke, Anderson, Garrison and Archer (1999)
Unit of analysis: combination of thematic and syntactic units
This model assesses the social presence of an online learning community. It distinguishes between three broad categories:
Interactive (expressions that communicate mutual attention and awareness), which may include:
Posting messages using the reply feature
Referring explicitly to the contents of others‘ messages
Asking other learners questions
Cohesive (expressions that build and sustain a sense of group commitment) which may include:
Addressing participants by name
Addressing the group as we, us, our group
Affective (expressions that communicate emotion, mood) which may include:
Expressing feelings
Self disclosing using humour
Gunawardena, Lowe & Anderson (1997)
Unit of analysis: Whole message
This model evaluates the social construction of knowledge in an online discussion forum. It distinguishes between five phases of knowledge construction:
Phase I: Sharing/comparing of information, which may include:
Statements of observation/opinion
Statement of agreements from one or more other participants
Corroborating examples provided by one or more participants
Definition, description, or identification of a problem
Phase II: Discovery of dissonance which may include:
Identifying and stating areas of disagreement
Asking and answering questions to clarify the source and extent of disagreement
Phase III: Negotiation/Co-construction which may include:
Negotiation or clarification of the meaning of terms
Identification of areas of agreement or overlap among conflicting concepts
Proposal and negotiation of new statements embodying compromise
Phase IV: Testing tentative constructions which may include:
Testing the proposed synthesis against "received fact" as shared by the participants and/or their culture
Testing against personal experience
Testing against formal data collected
Testing against contradictory testimony in the literature
Phase V: Agreement statement/applications of newly constructed meaning which may include:
Summarisation of agreement
Applications of new knowledge
Metacognitive statements by the participants illustrating their understanding that their knowledge or ways of thinking (cognitive schema) have changed as a result of the online interaction
Henri's Cognitive dimension (1992)
Unit of analysis: thematic unit
This model evaluates critical thinking of online learners.
Critical thinking. There are five different types:
Elementary clarification - passing on information without elaboration
In depth clarification - analysing a problem, identifying assumptions
Inference - concluding based on evidence from prior statements
Judgment - expressing a judgment about an inference
Strategies - proposing a solution, outlining what is needed to implement the solution
Each of the five types of critical thinking is classified according to the dichotomy of surface versus deep level information processing.
Surface level information processing - repeating a message without adding new information, making statements without justification, or suggesting a solution without explanation.
In depth level information processing - bringing in new information, showing links, proposing solutions with analysis of possible consequences, and providing evidence of justification.
Newman, Johnson, Webb & Cochrane (1997)
Unit of analysis: thematic unit
This model measures the level of critical thinking by expanding on Henri's (1992) model. It includes ten indicators:
Relevance
Importance
Novelty
Bringing outside knowledge or experience
Justification
Critical assessment
Linking ideas or interpretation
Ambiguity and clarity
Practical utility
Width of understanding
Each of the aforementioned ten indicators has its own list of paired opposites, one an indicator of surface level processing, one of in depth processing. For example, "Irrelevant statements or diversions" versus "Relevant statements".
Kirkley, Savery & Grabner-Hagen (1998)
Unit of analysis: instructional content of each individual sentence
This model evaluates the different means of learning assistance that an online moderator may render to the learners:
Scaffolding - refers to the help, guidance and comments given to help the learner master the materials and move to a higher level of understanding.
Feedback on performance - information (positive or negative) given by the moderator/instructor on specific acts or ideas.
Cognitive structuring - a means of assistance whereby the moderator/instructor provides a structure for thinking and acting that helps the online learner organise "raw" experience.
Modeling - occurs when the moderator/instructor offers behaviour for imitation.
Contingency management - used to reward behaviours through praise/encouragement, or control undesirable behaviours through punishment in the form of censure.
Instructing - occurs when the moderator/instructor gives explicit information on specific acts.
Questioning - used as a prompt, to stimulate thinking and provoke creations by the learner.
An example to illustrate how these models may be used
The following research study provides an example illustrating how the aforementioned models may be used.
Research study: Asynchronous online discussion on hypermedia design principles
Thirty-eight students were enrolled in a hypermedia design and development course. In this particular module, students learned important concepts such as learner control and the use of media. At the end of the course, students designed and developed hypermedia projects that served as instructional materials to be used in actual classroom settings. Classes met on a weekly basis, yet throughout the duration of the course, two asynchronous online discussion sessions were held. These asynchronous online discussions, which lasted about four weeks each, were conducted using the discussion forums available in Blackboard, a web based course management system. The overall objectives of the online discussions were: 1) to provide each student an opportunity to identify design problems in their classmates' projects and give suggestions to solve the problems; and 2) to provide students the opportunity to comment on their classmates' ideas and suggestions.
Before the commencement of the online discussion, the students were first briefed, in a face to face environment, on the task they were to do. Hard copies of the students' online postings were printed from Blackboard at the end of the discussion. The actual analysis of the postings would be carried out in two parts. In the first part, the online postings would be read and divided into the appropriate units of analysis (see the following section for a more in depth discussion of units of analysis). The second part would involve the use of the models on the identified units of analysis.
The aforementioned models offer educators the means to evaluate a host of different issues pertaining to the learners' online discussion. Thus, depending on an educator's evaluation aims, the appropriate models can be chosen and utilised. For example, in order to evaluate the extent to which the students are responding to one another (i.e. learner-learner interaction), Henri's (1992) model would be used. Based on this model, all the identified units of analysis would be examined to determine whether they are explicit, implicit, or independent statements (Henri, 1992). To help better capture and show the pattern of connection among the units of analysis, a visual mapping of all the units can also be done. Explicit and implicit interactions would reveal to the educator whether the students are commenting and responding to each other's ideas. A preponderance of independent statements, on the other hand, would suggest little real discussion or debate, with few students taking sides on issues, negotiating, or arriving at a compromise. Educators, armed with such knowledge, can then take the necessary steps (e.g. giving encouragement) to promote interaction among the students.
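By way of illustration only (this sketch is ours, not part of the original study), once each unit of analysis has been hand coded against Henri's (1992) interactivity categories, a simple tally gives the educator a first overview of the interaction pattern. The Python sketch below assumes a hypothetical list of hand coded units:

    from collections import Counter

    # Hypothetical hand coded units of analysis from one discussion thread,
    # labelled with Henri's (1992) interactivity categories
    coded_units = ["explicit", "implicit", "independent", "explicit",
                   "implicit", "implicit", "independent", "explicit"]

    tally = Counter(coded_units)
    for category in ("explicit", "implicit", "independent"):
        share = tally[category] / len(coded_units)
        print(f"{category:>12}: {tally[category]} units ({share:.0%})")

A thread dominated by independent statements would then stand out immediately as a candidate for the educator's intervention.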
Educators who wish to go beyond studying the mechanistic relationships among units of analysis, into evaluating the extent of knowledge construction among the learners, would find the model by Gunawardena et al. (1997) helpful. This is because this model reveals the stages each unit of analysis has attained in terms of constructivist knowledge creation. An example of an actual Phase I unit of analysis is given below.
I concur with Sharon that there were no buttons that allowed learners to move from one slide to another. Just a suggestion...you might want to include some appropriate navigation buttons.
Educators would be interested to know that the movement from Phase I to Phase V indicates progress from the lower to higher mental functions, and reveals how learners contribute toward the construction of knowledge (Gunawardena, 1999).
The aforementioned unit of analysis can also be classified, by educators evaluating the extent of student involvement in the online community, as an interactive type of social presence (Rourke et al., 1999) since it expresses mutual attention and awareness, by referring explicitly to the contents of messages by others. Evaluating the social presence of a learning community would give educators an idea of how the relationships among the community members are developing. This allows educators to step in, for example, to help develop the relationships by making the group interactions appealing and fun to all.
To evaluate the students' thinking skills and levels of information processing, the Henri (1992) and Newman et al. (1997) models would be used. These models indicate whether the thinking skills exhibited by the students represent a surface or an in depth level of information processing. If the thinking is at a surface level, educators can also discover the reasons for it from the two models. Some of the reasons for surface level thinking include students not justifying their judgments or comments, or proposing solutions with few details or explanations. The Henri (1992) and Newman et al. (1997) models thus offer educators a valuable tool to diagnose and help improve their students' quality of thinking.
Discussion
Although the aforementioned models are very useful to educators, it is also important to note that there are some drawbacks associated with them. Educators need to be aware of these drawbacks so that they can deal with them appropriately. In this paper, we highlight three of the most common drawbacks.
The first common drawback is the unreliable use of the unit of analysis (Rourke et al., 1999). Krippendorff (1980) described the unit of analysis as a discrete element of text that is observed, recorded, and thereafter considered data. One way is to take the learners' online message postings and analyse each posting in turn, with reference to the threads of discussion topics (as used in the Gunawardena et al., 1997 model). In this case, the messages are the units of analysis. This method, though simple to use, is not entirely perfect, as online postings usually contain more than one idea or thought. An alternative is the "thematic unit", which is defined by Budd, Thorp, and Donohew (1967) as "a single thought unit or idea unit that conveys a single item of information extracted from a segment of content" (p. 34). Thematic units, as adopted by Henri's (1992) model, reflect the logic of the indicators, but resist reliable and consistent identification (Howell-Richardson & Mellar, 1996). Yet another alternative (Rourke et al., 1999) is to combine the flexibility of the thematic unit with the identification attributes of a syntactical unit (e.g. a sentence, phrase or paragraph). Nonetheless, despite the fact that many units of analysis have been experimented with, none has been sufficiently reliable, valid and efficient to achieve pre-eminence (Rourke et al., 1999). Krippendorff (1980) concedes that, ultimately, the choice of the unit of analysis "involves considerable compromise" (p. 64) between meaningfulness, productivity, efficiency, and reliability.
The second common drawback in using these models is the high degree of subjectivity involved in discriminating the data and putting them into the correct categories. For example, it is difficult to distinguish one type of cognitive or metacognitive data from another using Henri's cognitive model, because of the ambiguities and overlaps in the indicators of the cognitive skills (Bullen, 1997; Gunawardena et al., 1997). As a result, it becomes both very time consuming to analyse the online discussion transcripts using the models, and difficult to achieve high reliability (the consistency of results for the same data at different times or under different conditions, such as when coded by different people).
Since high reliability is desirable, how should one go about attaining it? We propose one possible means - reproducibility (inter-coder reliability). Inter-coder reliability can be defined as "the extent to which different coders, each coding the same content, come to the same coding decisions" (Rourke, Anderson, Garrison & Archer, 2001). Two educators should do the analysis independently and have the results cross examined by one another. But prior to doing the actual analysis, we recommend that the educators do a "sample exercise" on other messages to familiarise themselves with the models. Once the educators are comfortable with the models, they can then code and categorise the actual messages independently. The results may then be compared and the inter-coder reliability reported using two common methods: the percent agreement statistic and Cohen's kappa. The former refers to the number of agreements per total number of coding decisions (Rourke et al., 2001). It is calculated using Holsti's (1969) coefficient of reliability:
percent agreement = 2m / (n1 + n2)

where:
m  = number of coding decisions on which the two coders agree
n1 = number of coding decisions made by the first coder
n2 = number of coding decisions made by the second coder
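To make the calculation concrete, the following minimal Python sketch (our illustration, using hypothetical coding decisions from two coders classifying message units with Henri's (1992) interactivity categories) computes Holsti's coefficient:

    def percent_agreement(codes1, codes2):
        """Holsti's (1969) coefficient of reliability: 2m / (n1 + n2),
        where m is the number of coding decisions on which both coders agree."""
        if len(codes1) != len(codes2):
            raise ValueError("Both coders must code the same set of units")
        m = sum(1 for a, b in zip(codes1, codes2) if a == b)
        return 2 * m / (len(codes1) + len(codes2))

    # Hypothetical coding decisions: two coders classify eight message units
    coder1 = ["explicit", "implicit", "implicit", "independent",
              "explicit", "explicit", "implicit", "independent"]
    coder2 = ["explicit", "implicit", "explicit", "independent",
              "explicit", "implicit", "implicit", "independent"]
    print(percent_agreement(coder1, coder2))  # 0.75, i.e. 75% agreement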
Cohen's kappa, on the other hand, is a chance corrected measure of inter-coder reliability that assumes two coders, n cases, and m mutually exclusive and exhaustive nominal categories (Capozzoli, McSweeney, & Sinha, 1999). The formula for it is:

K = (Fo - Fc) / (N - Fc)

where:
N  = the total number of judgments made by each coder
Fo = the number of judgments on which the coders agree
Fc = the number of judgments for which agreement is expected by chance
(See Capozzoli et al., 1999, and Cohen, 1960, for further discussion.)
For percent agreement figures, Riffe, Lacy, and Fico (1998) stated that "a minimum level of 80% is usually the standard" (p. 128). For Cohen's kappa, values exceeding 0.75 suggest strong agreement above chance, values in the range of 0.40 to 0.75 indicate fair levels of agreement above chance, and values below 0.40 are indicative of poor agreement above chance levels (Fleiss, 1981). Any discrepancies in coding decisions should be discussed and negotiated by the coders until mutual agreement is reached.
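Cohen's kappa can be computed in the same count form. Again as a minimal sketch of ours, reusing the hypothetical coding decisions from the previous sketch:

    from collections import Counter

    def cohens_kappa(codes1, codes2):
        """Cohen's (1960) kappa in count form: K = (Fo - Fc) / (N - Fc)."""
        n = len(codes1)                                        # N: judgments per coder
        fo = sum(1 for a, b in zip(codes1, codes2) if a == b)  # Fo: agreements
        c1, c2 = Counter(codes1), Counter(codes2)
        # Fc: agreement expected by chance, from the coders' marginal
        # frequencies for each category
        fc = sum(c1[cat] * c2[cat] for cat in c1.keys() & c2.keys()) / n
        return (fo - fc) / (n - fc)

    coder1 = ["explicit", "implicit", "implicit", "independent",
              "explicit", "explicit", "implicit", "independent"]
    coder2 = ["explicit", "implicit", "explicit", "independent",
              "explicit", "implicit", "implicit", "independent"]
    print(round(cohens_kappa(coder1, coder2), 2))  # about 0.62

Here kappa (about 0.62) falls within the fair agreement range (Fleiss, 1981), while the raw percent agreement of 75% sits below the 80% standard, illustrating why reporting both statistics can be informative.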
The third drawback associated with the use of the aforementioned models is the inability of these models to evaluate the interactions, cognitive processes and roles of "passive learners". Passive learners, as found in a study by Sutton (2000), do not participate often in the discussion but consider themselves to have learned a lot from reading and reflecting on the comments and responses posted by others. Nonetheless, there are alternative means to evaluate the interactions, cognitive processes and roles of passive learners.
One possible method to evaluate the interactions and roles of passive learners is to use certain asynchronous discussion forums (such as the Knowledge Community course management software) that are able to capture the number of times these learners have read the messages posted by others. By using this feature, an educator will know for certain whether these learners are actively reading about the issues presented in the online discussion, albeit in a quiet way, or whether they are truly uninvolved. Educators can then take the appropriate steps to encourage their participation. To evaluate the cognitive processes of "passive learners", educators may want to use other forms of assessment, e.g. projects, assignments, and interviews.
Conclusion
Asynchronous discussion forums can provide a platform for online learners to communicate with one another easily, without the constraints of place and time. In an attempt to evaluate the online learning communities of asynchronous discussion forums, we adopted activity theory to identify the various evaluation issues. Three broad issues were described in this paper - interactions among the online participants, cognitive skills of the learners, and the roles of the online moderators or instructors. Appropriate models were then described to address each of these three issues. The drawbacks of each model were also highlighted, and possible means to overcome them discussed.
References
Ahern, T. C., Peck, K. & Laycock, M. (1992). The effects of teacher discourse in computer-mediated discussion. Journal of Educational Computing Research, 8(3), 291-309.
Barab, S. A. & Duffy, T. (2000). Architecting participatory learning environments. In D. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments. Hillsdale, NJ: Lawrence Erlbaum Associates.
Bielaczyc, K. & Collins, A. (1999). Learning communities in classrooms: A reconceptualization of educational practice. In C. Reigeluth (Ed), Instructional design theories and models: Volume II (pp. 269-292). Hillsdale, NJ: Lawrence Erlbaum Associates.
Budd, R. W., Thorp, R. K. & Donohew, L. (1967). Content analysis of communications. New York: Macmillan.
Capozzoli, M., McSweeney, L. & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics, 27(1), 3-23.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
Cole, M. & Engeström, Y. (1993). A cultural-historical approach to distributed cognition. In G. Salomon (Ed), Distributed cognitions: Psychological and educational considerations (pp. 1-46). New York: Cambridge University Press.
Collins, A., Brown, J. S. & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Erlbaum.
Gunawardena, C. N., Lowe, C. A. & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.
Gunawardena, C. N. (1999). The challenge of designing and evaluating ‘interaction‘ in web-based distance education. (ERIC Document Reproduction Service No. ED448718)
Gunawardena, C., Carabajal, K. & Lowe, C. A. (2001). Critical analysis of models and methods used to evaluate online learning networks. (ERIC Document Reproduction Service No. ED456159)
Harasim, L. (1989). On-line education as a new domain. In R. D. Mason & A. R. Kaye (Eds), Mindweave: Communication, computers and distance education. Oxford: Pergamon Press. http://icdl.open.ac.uk/literaturestore/mindweave/chap4.html
Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed), Collaborative learning through computer conferencing: The Najaden papers (pp. 117-136). Berlin: Springer-Verlag.
Holsti, O. (1969). Content analysis for the social sciences and humanities. Don Mills: Addison-Wesley Publishing Company.
Howell-Richardson, C. & Mellar, H. (1996). A methodology for the analysis of patterns of participation within computer mediated communication courses. Instructional Science, 24, 47-69.
Hsu, J. F., Chen, D. & Hung, D. (2000). Learning theories and IT: The computer as a tutor. In M. D. Williams (Ed), Integrating technology into teaching and learning (pp. 71-92). Singapore: Prentice-Hall.
Hung, D. & Chen, D. (2000). Appropriating and negotiating knowledge: Technologies for a community of learners. Educational Technology, 40(3), 29-32.
Hung, D. & Wong, A. (2000). Activity theory as a framework for project work in learning environments. Educational Technology, 40(2), 33-37.
Jonassen, D. H. (2002). Learning as activity. Educational Technology, 42(2), 45-51.
Kanuka, H. & Anderson, T. (1998). On-line social interchange, discord and knowledge construction. Journal of Distance Education, 13(1), 57-74. http://cade.icaap.org/vol13.1/kanuka.html
Kirkley, S. E., Savery, J.R. & Grabner-Hagen, M. M. (1998). Electronic teaching: Extending classroom dialogue and assistance through e-mail communication. In C. J. Bonk & K. S. King (Eds.), Electronic Collaborators: Learner-Centered Technologies for Literacy, Apprenticeship, and Discourse (pp. 209-232). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills: Sage Publications.
Kuutti, K. (1996). Activity theory as a potential framework for human-computer interaction research. In B. A. Nardi (Ed), Context and consciousness: Activity theory and human-computer interaction (pp.17-44). Cambridge, MA: MIT Press.
Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1-6.
Newman, D. R., Johnson, C., Webb, B. & Cochrane, C. (1997). Evaluating the quality of learning in computer supported cooperative learning. Journal of the American Society for Information Science, 48, 484-495.
Pang, M. N. & Hung, D. (2001). Activity theory as a framework for analyzing CBT and E-learning environments. Educational Technology, 41(4), 36-42.
Paulsen, M. F. (1995). Moderating educational computer conferences. In Z. L. Berge & M. P. Collins (Eds), Computer mediated communication and the online classroom: Vol. 3. Distance learning (pp. 81-89). Cresskill, NJ: Hampton Press.
Riffe, D., Lacy, S. & Fico, F. (1998). Analyzing media messages: Quantitative content analysis. New Jersey: Lawrence Erlbaum Associates, Inc.
Roehler, L. R. & Cantlon, D. J. (1997). Scaffolding: A powerful tool in social constructivist classrooms. In K. Hogan & M. Pressley (Eds), Scaffolding student learning (pp. 6-42). Cambridge, Massachusetts: Brookline Books.
Rourke, L., Anderson, T., Garrison, D. R. & Archer, W. (1999). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2). [viewed 9 May 2003, verified 27 Jul 2003] http://cade.icaap.org/vol14.2/rourke_et_al.html
Rourke, L., Anderson, T., Garrison, D. R. & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12, 8-22.
Rovai, A. P. (2002). Development of an instrument to measure classroom community. Internet and Higher Education, 5, 197-211.
Squire, K. D. & Johnson, C. B. (2000). Supporting distributed communities of practice with interactive television. Educational Technology Research and Development, 48(1), 23-43.
Sutton, L.A. (2000). Vicarious interaction in a course enhanced through the use of computer-mediated communication. Unpublished PhD dissertation. Arizona State University.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of college attrition. (2nd ed.). Chicago, IL: University of Chicago Press.