Great feedback - I will digest it and integrate it into my plan. I see what you mean about the mixed messages re: summative/formative in the student reactions to the learning tool. At MIT, whenever a new methodology is introduced, it's the policy to have a summative evaluation after learning has taken place (at the end of the course). My initial reaction is to indicate this policy in my plan and include a moderation form used by MIT as an example. Thanks for the link to Otago. I will study this before finalising.
Cheers
Jennifer
Sunday, May 11, 2008
Saturday, April 26, 2008
Catching up - weeks 5 and 6
Hi
What a lot of reading!!! Here are my results - lots of food for thought.
Type of Evaluation for my project
Background
The eLearning module under scrutiny is in the management discipline. The course is at level 4 (NZQA), but there are few texts and no interactive tools, either locally or internationally, pitched at this level. The intention is to create a learning tool to enable students to:
a) Understand the formation and development of organisational structures
b) Be able to apply organisational structures from the learning tool to a range of existing organisations within the New Zealand context. Students would be able to use an interactive model – after viewing the introductory models, students could manipulate structures to meet a range of criteria.
c) Feedback is available to students on the veracity of their answers.
d) Multiple attempts are available.
e) Students with Internet access could use this learning tool outside the Polytechnic environment.
The intention is to deliver this learning module as a Web-enhanced component – integrated into a course currently delivered f2f, with the future possibility of a blended/distance learning course. This project could also be used in a number of introductory management courses. (eLearning guidelines for New Zealand, http://elg.massey.ac.nz, 25.03.08)
The purpose of the feedback from students and teachers is to enable this learning tool to be developed so it is:
· User friendly to students
· Effective as a learning tool
· Effective in enabling students to transfer theoretical knowledge to a range of real-life situations.
Needs analysis
I have already conducted this phase of the project in several stages:
1. Students manipulated a card system to produce appropriate organisational models after f2f teaching on this topic
2. Students viewed (and had continuous access to) a PowerPoint presentation on the standard range of organisational structures with supplementary notes to reinforce transition phases and rationale for restructuring.
These learning tools have been enthusiastically received by the students and have had a positive effect on their learning outcomes, evidenced by:
· Anonymous feedback forms with open-ended questions
· Student results in summative evaluation essays compared to students in earlier courses who did not have access to Phase 1 (the cards) or Phase 2 (the PowerPoint).
These methods were conducted prior to this evaluation course and therefore do not have the rigour that could now be applied. A critique of these evaluation methods indicated that the results are both anecdotal and unscientific. One issue I have is that the course is currently delivered to a small cohort (6 – 12 students). I would appreciate advice on this conundrum.
Methods of Evaluation
Initially the evaluation would be formative – gathering feedback from users and other relevant groups during the development and implementation process so that improvements and adjustments can be made (www.wikieducator.org). The rationale is that this learning tool is in the development stage. I am inclined to use three methods, conducted in stages, although Stake's comment on formative vs summative evaluation (in Lockee, Moore & Burton) – when the chef tastes the soup it's formative; when the diners (or a food critic) taste the soup it's summative – may well change my mind!
Method 1
Once the learning tool is developed by our in-house designers, an expert review will be conducted by other lecturers in the management field to test for content relevancy and accuracy (Lockee, Moore & Burton, 2002). At this stage I need to research an appropriate model in more detail, but I am thinking of an open-ended questionnaire.
Method 2
A field trial using the student group on the completed package (Lockee, Moore & Burton, 2002). This would be based on Kirkpatrick's model for summative evaluation (although this is summative, I really think it is a good evaluation model). The idea is to test the effectiveness of the knowledge transfer from the learner's perspective. This would be conducted at the end of the initial session by including a feedback form; integration at this stage would increase the likelihood of a student response. The form would include a Likert-scale analysis, followed by an opportunity for student comment in an open-ended format. Involving the whole class in the survey would not be an issue for analysis due to the small number of students currently studying this unit.
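As an aside on how the Likert data from such a small cohort might be collated, here is a minimal sketch (entirely hypothetical statements and ratings, not the actual feedback form): with only 6–12 respondents I would report counts and medians for each statement rather than averages.

```python
# A minimal sketch with hypothetical statements and ratings - not the actual form.
# For a cohort of 6-12 students, counts and medians are more honest than means.
from collections import Counter
from statistics import median

SCALE = {1: "strongly disagree", 2: "disagree", 3: "neutral",
         4: "agree", 5: "strongly agree"}

responses = {  # statement -> one rating (1-5) per student
    "The learning tool was easy to use": [4, 5, 3, 4, 4, 5, 2],
    "The feedback helped me understand the structures": [5, 4, 4, 3, 5, 4, 4],
}

for statement, ratings in responses.items():
    counts = Counter(ratings)
    spread = ", ".join(f"{SCALE[v]} x{counts[v]}" for v in sorted(counts))
    print(f"{statement}: median {median(ratings)} ({spread})")
```

The open-ended comments would still be read and themed by hand; the numbers are only there to flag statements worth probing further.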
Method 3
Based on the Observation Sessions used by the Analysis and Evaluation Group (2005). My rationale is that this learning tool is a new approach to the management topic and uses quite complex manipulation, so it is essential that it is user friendly. This method will also allow an element of peer review (from the teacher's perspective). Jones (in Harvey, 1998) cautions on the Role of the Observer and provides a number of very relevant guidelines for me to consider when conducting this stage.
By adopting these multiple methods I hope to emulate the Eclectic-Mixed Methods-Pragmatic paradigm. For example, usability would be measured both by the students in their feedback and by lecturer observation (Hegarty, 2003).
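To make the triangulation idea concrete, here is a rough sketch (hypothetical task names and data, not part of the plan itself) of how student ratings and lecturer observation notes could be lined up per task, so agreement or divergence between the two sources is visible at a glance.

```python
# A rough sketch with hypothetical tasks and data - it simply collates the two
# evidence sources (student ratings, lecturer observations) for each task.
from statistics import median

student_ratings = {  # task -> ease-of-use ratings (1-5) from the feedback form
    "Build a functional structure": [4, 4, 5, 3, 4],
    "Restructure into a matrix": [2, 3, 2, 3, 2],
}
observation_notes = {  # task -> lecturer's note from the observation session
    "Build a functional structure": "completed quickly, little hesitation",
    "Restructure into a matrix": "several students needed prompting",
}

for task, ratings in student_ratings.items():
    note = observation_notes.get(task, "no observation recorded")
    print(f"{task}: student median {median(ratings)}; observed: {note}")
```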
References
Analysis and Evaluation Group (2005). Evaluation Plan for Usability Testing of Module Prototypes. Retrieved March 24, 2008 from www.wikieducator.org
e-Learning guidelines for New Zealand. B. Who will use the e-Learning guidelines? Retrieved March 22, 2008 from http://elg.massey.ac.nz
Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. EDUCAUSE Quarterly, Number 1, 2002.
Hegarty, B. (2008). Types of Evaluation Models. Otago University.
Jones, C. (1998). Ethnography. In J. Harvey (Ed.), Evaluation Cookbook (pp. 34-35). Learning Technology Dissemination Initiative, Scottish Higher Education Funding Council. Retrieved April 15, 2008 from www.icbl.hw.ac.uk/ltdi/cookbook
WikiEd. Evaluating the impacts of eLearning/Evaluation methods. Retrieved April 20, 2008 from www.wikieducator.org/WikiEdProfessional_eLearning_Guidebook/Evaluating
Summary
Evaluation Plan for Usability Testing of Module Prototypes – Prepared by Analysis and Evaluation Group December 2005
The project is to develop interactive information literacy modules for the tertiary sector in New Zealand. The needs analysis clearly shows a gap in available tools to address both student and lecturer requirements. The purpose of the modules is to allow a diverse range of students to readily access and use them to enhance their learning outcomes. The objective is to test that the usability and design of the modules are appropriate for the intended learners.
In order to address the issue of validity, three institutions participated in the initial study. It was envisaged that the study would then be rolled out to other institutions in the tertiary sector.
The study clearly identified the areas of significance to be tested:
· User reaction
· Engagement with learners
· Appropriateness for the New Zealand tertiary sector
· Ease of updating and re-contextualising
The multiple-method design employed here is rigorous. Both student and expert feedback are sought, and the range of data collection is comprehensive: questionnaire, interview, usability observation and expert review. The triangulation of data (Hegarty, 2008) with the pragmatic approach of the Eclectic-Mixed Methods orientation (Reeves, 2006) should result in an accurate review. Combinations of formative and summative evaluations are conducted, with both qualitative and quantitative methods employed.
A particularly interesting aspect of the evaluation is the methodology employed in the usability questionnaire for both students and staff. Both staff and students are asked to 'voice' their reactions as they 'test' the modules; this is recorded, in addition to the written questionnaire. Although the rationale for this method is 'to ensure the observer doesn't miss anything', it would add another dimension to the testing. It is not clear whether these recordings are attributable, or whether software such as CODE-A-TEXT (McAteer, in Harvey, 1998) is used.
The pre-testing information sheets provide excellent information to the participants, covering such topics as instructions, time-line and confidentiality. The content of the questionnaires is divided into three sections: usability, instructional design, and content and effectiveness for learning.
The design of this evaluation should be effective in achieving its purpose.
References
Hegarty, B. (2008). Types of Evaluation Models. Otago University.
Reeves, T. (2006). Educational paradigms. Retrieved from cache, February 5, 2006, www.educationau.edu.au/archives/CP/REFS/reeves_paradigms.htm
McAteer, E. (1998). Interviews. In J. Harvey (Ed.), Evaluation Cookbook (pp. 40-43). Learning Technology Dissemination Initiative, Scottish Higher Education Funding Council. Retrieved April 15, 2008 from www.icbl.hw.ac.uk/ltdi/cookbook
Relevance to my project
As I intend to use a multiple-method evaluation design for my project, the above design is relevant. I found the observation section very interesting, although I doubt that recording the sessions is possible in my context. I would consult my IT experts on this! The triangulation concept is definitely one I shall apply, as I need expert advice on design, feedback on subject content and then the student reaction to the actual learning tool. I really like the questionnaire design – it seems to cover all bases – and would consider using these questions. This is a really useful model for me to follow – I would like to see the results of this evaluation – is it available?
Cheers
Jennifer
Saturday, March 29, 2008
Week 4 - wow this was a hard one!
Week 3 post-script
Checked on Manukau Institute policy re guidelines. Historically we are not seen as a distance/eLearning-based institute, hence we have not adopted the NZ ELG. Current research is ongoing into 'how our eLearning supports current student needs in a flexible delivery environment'. I suspect that with the contraction of distance learning (re the Canterbury Poly scandal before the last election) and the repercussions for distance learning delivered by Polys in NZ, we are being very careful. Current TEAC policy is that we 'stick to our own catchment area' – eLearning does not comply!
Notes for Blog – Week 4 – Educational Paradigms
Paradigm 1 - Analytic-Empirical-Positivist-Quantitative Paradigm
Impossible to separate parts from wholes – cause and effect relationships are too complex to measure in a scientific manner. This method was used by the University of Michigan (U-M) School of Dentistry when they applied two of Flagg's formative evaluation measures (Johnson, Brittain, Glowacki & Van Ittersum). Pilot 1 – media format – lends itself to this approach: student preferences could be easily identified using their logs and student surveys, and the technological basis of the question suits this quantitative approach.
I find it really difficult to 'get my head around' the application of this paradigm as an all-encompassing evaluation methodology. Firstly, I have to admit that my personal attitudes play a role in my philosophical position. At this stage I could say 'I rest my case' – evaluator bias! Using a 'scientific' approach surely requires control groups against which those sampled can be measured. Hegarty (2003) identifies the ethical dilemma in providing a group with different learning tools and running a control to measure effectiveness, and also references the possibility of subject bias when conducting this evaluation using self-selected groups.
The purpose of the research is primarily to identify the preferred delivery technology. Without a larger sample beyond this self-selected and specialist group, can these results be applied to a more general audience? 'Students prefer podcasts as a supplementary learning tool.' Does the application of this paradigm to an evaluation methodology limit the reference field?
Paradigm 2 – Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm
I use this evaluation method every time I stand in a classroom, or in an eLearning context, when I give feedback to a student. My query with expanding this to the level of evaluation in a group situation is that it seems impossible! Surely we all carry our context with us? How is it possible to generalise? An example is student evaluation of lecturing staff – a regular analysis of our teaching – where the outlier is still a relevant sample. I agree with the general tenor of this philosophy but cannot reconcile how it can be used as a realistic evaluation tool.
Paradigm 3 – Critical Theory-Neomarxist-Postmodern-Praxis Paradigm
I find this approach very seductive, especially relevant when teaching multi-cultural and socially diverse groups where such diversity is an every-day issue. One contention I have with course design is the impossibility of designing a neutral course – one example is perhaps to evaluate the Microsoft Helpline. I would be really interested in comments on their offerings being evaluated using this paradigm. As a deliverer of an eLearning program on a worldwide basis they would surely need to consider it; effectively they do not contextualise any teaching (beyond reference back to the program). I wonder if this is the Neomarxist approach in action – would they be horrified to consider this?
Paradigm 4 – Eclectic-Mixed Methods-Pragmatic Paradigm
Reeves' bias towards 'cherry picking' is obvious throughout this article, and the concluding paradigm comes as no surprise. 'Horses for courses' is his mantra. The triangulation approach to educational evaluation is a common theme throughout much analysis. An example is the 360° approach used in management/human resource surveys – see evaluation from all angles.
Comparison
A brief flick through the Types of Evaluation Models on the web proved a frustrating experience, as so many articles are subscriber-only. Most models refer to entire programs rather than discrete learning tools, which are my focus, so application required a mind-shift on my part. Another feature of the freely available articles, perhaps because many are from the United States, is the emphasis on ROI rather than the student-centred focus of the ELG guidelines.
Payne (2004), in Johnson's (2008) lecture on Evaluation Models, has a moment of insight where he makes the analogy 'models are to paradigms as hypotheses are to theories'. Johnson's (2008) lecture notes place Patton's model under the heading 'Management Models' and outline Patton's (1997) emphasis on the utility of evaluation findings in the context of program design, with a focus on key stakeholders' key issues and the need for evaluators to 'work closely with intended users'.
Comment: I guess from my perspective this is a no-brainer. How could one evaluate anything without involving the customer?
Under the heading 'Anthropological Models' Johnson (2008) also gives a delightful insight into qualitative methods:
'Tend to be useful for describing program implementation, studying process, studying participation, getting program participant's views or opinions about program impact… identifying program strengths and weaknesses'
More significantly, Johnson (2008) highlights the utility of this method over specific objective research in discovering 'unintended outcomes'.
I must admit to some confusion when I read Lance Hogan's (2007) commentary on the management-oriented approach; this certainly does not correspond with Johnson's (2008) typology! Lance Hogan (2007) provided some interesting critiques in his literature review. His 'Participant-Oriented Approach', where the evaluator engages with the stakeholder as a problem-solving partner, seems logical.
Comment: I am assuming that he means the students (participants), evaluator and designers.
On a lighter note – and relevant to my learning tool – the article by Columbia University (2008) emphasised faculty and students working together to produce a mutually satisfying learning experience through a heuristic review using expert users, and a breakdown of several layers of student review: formative, effectiveness, impact and maintenance. It was easy to understand and I guess follows Stake's Responsive Model (look for your comment on this, Bronwyn). The feedback section was particularly instructive. I would assume that this used a triangulation method of gaining the data.
This was a toughie! I felt more confused – although it did open my mind up – at the end! Hopefully Evaluation Methods is more cut and dried and appeals to my linear nature.
Cheers
Jennifer
References
Brittain, S., Glowacki, P., Van Ittersum, J., & Johnson, L. (2006). Formative evaluation strategies helped identify a solution to a learning dilemma. Retrieved March 14, 2008, from http://connect.educause.edu/Library/EDUCAUSE+Quarterly/PodcastingLectures/39987
Flagg, B. N. (1990). Formative Evaluation for Educational Technologies. Hillsdale, NJ: Erlbaum Associates.
Hegarty, B. (2003). Experimental and Multiple Methods Evaluation Models. Retrieved March 25, 2008, from http://wikieducator.org/Evaluation_of_eLearning_for_Best_Practice
Johnson, B. (2008). Lecture Two: Evaluation Models. Retrieved March 26, 2008, from www.southalabama.edu/coe/bset/johnson/660lectures/lec2doc
Lance Hogan, R. (2007). The Historical Development of Program Evaluation: Exploring the Past and Present. Online Journal of Workforce Education and Development. Retrieved March 28, 2008, from http://wed.siu.edu/Journal/VolIInum4/Article_4.pdf
Columbia University (2008). Six Facets of Instructional Product Evaluation. Retrieved March 27, 2008, from http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt
Monday, March 24, 2008
Week 3 - still plugging on
http://elg.massey.ac.nz contends that to successfully implement e-learning guidelines this should be done at an institution-wide level. I'm not sure (but will find out) if MIT has adopted specific guidelines re e-learning. As this course asks me to identify e-learning guidelines specific to my project, I will continue on this track. Obviously if my institution has guidelines then my selection should be congruent with the global policy, but at this stage I am boxing in the dark (Easter holiday catch-up here).
Scenario: Unit Standard 16342 – Organisational Principles – Level 4
Students are required to demonstrate an understanding of Organisational Charts, their different types and the advantages and disadvantages to an organisation of applying these types.
Stage 1
Students view a presentation giving them basic information about different organisational structures – this progresses from simple to complex structures.
Stage 2
Students are given scenarios (based on the above) and manipulate structures to form a suitable organisational chart for each scenario. Feedback would be available to verify correct structures, and students would be given unlimited opportunities to trial their various structures.
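As a thought experiment (not a commitment from our designers), the checking-and-feedback step might boil down to something like the sketch below, where a chart is represented as 'who reports to whom' and compared against a hypothetical expected answer for a scenario; attempts are simply unlimited.

```python
# A minimal sketch only - hypothetical scenario data, not the real tool.
# A chart is a mapping of role -> the role it reports to (None for the top).

EXPECTED = {  # hypothetical correct chart for one scenario
    "CEO": None,
    "Operations Manager": "CEO",
    "Marketing Manager": "CEO",
    "Sales Team Leader": "Marketing Manager",
}

def check_chart(submitted):
    """Return feedback messages; an empty list means the chart fits the scenario."""
    feedback = []
    for role, reports_to in EXPECTED.items():
        if role not in submitted:
            feedback.append(f"'{role}' is missing from your chart.")
        elif submitted[role] != reports_to:
            feedback.append(f"Check who '{role}' should report to.")
    for role in submitted:
        if role not in EXPECTED:
            feedback.append(f"'{role}' is not part of this scenario.")
    return feedback

# Example attempt - the student can simply retry as many times as they like.
attempt = {"CEO": None, "Operations Manager": "CEO",
           "Marketing Manager": "Operations Manager"}
for message in check_chart(attempt) or ["Well done - this structure fits the scenario."]:
    print(message)
```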
Stage 3
Peer review – students discuss (online) their rationale for each structure chosen and the advantages and disadvantages to an organisation of their choice.
At this stage this is envisioned as a formative exercise – but after a trial period it could transition to a summative evaluation, as it meets the Unit Standard requirements.
My selected guidelines
1.1
TD1 – I have 'trialled' this in the classroom. I found, when teaching this f2f, that students have difficulty in understanding the significance to an organisation of an inappropriate structure, and also the transition (whether a reactive/organic or planned restructure) from one type of structure to another. In my trial I initially used volunteers (with enthusiastic input from observing students) and groups using the whiteboard. I then designed a presentation (Stage 1) which students find useful as an initial learning tool. My initial research indicates that the intended learning outcomes would be achieved with a well-designed e-learning tool. At this introductory level in the management discipline there are few resources available. The level is comparable to A level in the UK, and I have searched unsuccessfully for resources specific to this topic. Most resources are for Stage 1 undergraduate level, which is far too complex. (TD13)
TD3 – by placing the three stages on our eMIT site, students would have the opportunity of exploring the learning tool at their own pace. Feedback on their submitted structure would enable them to revisit any inappropriate structure (TT 3).
A discussion board would give the less experienced student the support to enable deeper learning to take place. At this stage it is not planned (by MIT) to make this course web-based or web-enhanced – just web-supported. Discussion of results would therefore be f2f. But I envisage that this could transition to a web-based course, and therefore TD10 is a possibility that I should consider when designing/deploying this learning tool.
MD 3 – I have already consulted our e-learning team and their input has been invaluable to me in adjusting the design and realising the possibilities of using an e-learning instrument for this discrete learning outcome.
Outside of the ELG I also found 'Kirkpatrick's Learning and Training Evaluation Theory' (http://www.e-learningguru.com/articles/art2_8.htm) particularly interesting. The article discusses its value in technology-based training, in particular the 'Behaviour in the Workplace' section – has transfer of knowledge/skills occurred? Although not directly related to my learning outcomes, this is very relevant to many courses conducted in a Polytechnic environment, where this transfer of knowledge is vital to the continued success of our courses.
Sunday, March 16, 2008
Week 2 post - slowly catching up!!
Importance of evaluation to me
I guess it’s a comfort blanket. If a course is evaluated it validates my delivery and gives me confidence as a teacher in my materials and assessment instruments.
I would define evaluation in my context at Manukau Institute of Technology as:
· The course evaluation taken by all students at completion
· Pre and post moderation of assessments – this gives me a peer review
· Usability of my teaching methods – am I setting up barriers to learners
· If team teaching, feedback from members of my team on the development of teaching resources, delivery methods and assessments
· Feedback by students during the course
· Retention of students during the course
· My marking of student assessments – gives me feedback on teaching and course resources
· Student results
Methods used that are familiar
Observation – formative teaching review done at my request by a peer
Questionnaires – completed by students, partly in a checklist form with a range over five levels (poor – excellent), supported by a comment area where students are prompted to respond to areas they found particularly helpful and areas where they felt improvement could be made. These form part of my personal evaluation and lead to any application for promotion on my part. These evaluations are conducted if:
· A new course is being delivered
· A lecturer is new to delivering an existing course
· Self-selection by lecturer (two courses per semester are mandatory)
Why is quality important in elearning?
I think quality is important in all learning. Is eLearning any different?
One significant factor is cost effectiveness/ROI (as the commitment of resources will usually be higher than for conventional methods), so an institution needs to be assured of the effectiveness over other methods of delivery.
eLearning feedback from the students is constrained by the lack of F2F context – if a class group is dissatisfied with their learning delivery then the feedback is fairly immediate (not many turn up to the next lecture). Email contact with a dissatisfied student is flinging a ball into the void!
eLearning – we are still learning about its level of effectiveness. Anecdotally it seems that there is a reduced completion rate in comparison to f2f, so an increased emphasis on a quality environment is an attempt to improve these statistics.
Thursday, March 13, 2008
Lots of reading
Hi everyone,
I really enjoyed Bronwyn's video (although somehow I could not turn up the volume, so had to lean very close to my screen) - quite an interesting experience. I have gone through most of this week's work - by this I mean I printed and/or saved lots of readings. Being a reflective learner, this means that I won't be sleeping anytime soon.... One point that really came through to me is that designing the appropriate plan is paramount - after all, you only get what you ask for. The methodology should be appropriate, and the triangulation concept makes lots of sense. I also think that the plan should have the ability to be modified to meet changing situations. If, say, a focus group pans the learning module, then go back to stage 1 to discover if it's the methodology at fault or the course being evaluated. This is off the top of my head - so it's away to read.
Have a good weekend
Jennifer
Sunday, March 9, 2008
And so to bed
Hi
it's been a long day - but at last have made some progress on this course. Looking forward to spending some time reflecting (yes I'm one of those) on all the media and links that are provided. Started to look at Bronwyn's video - but I'm afraid I'm too sleepy to appreciate this at the moment - also need to figure out how to turn up the volume on my laptop. Bronwyn I tried to add a photo to this blog - but after 1/2 hour gave up as it seemed to take forever to upload. Any tips here?
Sweet Dreams
Jennifer