Saturday, March 29, 2008

Week 4 - wow this was a hard one!

Week 3 post-script
Checked on Manukau Institute policy regarding e-learning guidelines. Historically we are not seen as a distance/eLearning-based institute, and hence have not adopted the NZ ELG. Research is currently ongoing into 'how our eLearning supports current student needs in a flexible delivery environment'. I suspect that, with the contraction of distance learning (recall the Canterbury Poly scandal before the last election) and the repercussions for distance learning delivered by Polys in NZ, we are being very careful. Current TEAC policy is that we 'stick to our own catchment area' – eLearning does not comply!

Notes for Blog – Week 4 – Educational Paradigms

Paradigm 1 - Analytic-Empirical-Positivist-Quantitative Paradigm
It seems impossible to separate parts from wholes – cause-and-effect relationships are too complex to measure in a strictly scientific manner. This method was used by the University of Michigan (U-M) School of Dentistry when they applied two of Flagg's formative evaluation measures (Brittain, Glowacki, Van Ittersum & Johnson, 2006). Pilot 1 (media format) lends itself to this approach: student preferences could easily be identified using their logs and student surveys. The technological basis of the question suits this quantitative approach.
I find it really difficult to 'get my head around' the application of this paradigm as an all-encompassing evaluation methodology. Firstly, I have to admit that my personal attitudes colour my philosophical stance. At this stage I could say 'I rest my case' – evaluator bias! Using a 'scientific' approach surely requires control groups against which those sampled can be measured. Hegarty (2003) identifies the ethical dilemma in providing one group with different learning tools while running a control group to measure effectiveness. Hegarty (2003) also notes the possibility of subject bias when conducting this kind of evaluation with self-selected groups.

The purpose of the research is primarily to identify the preferred delivery technology. Without a larger survey sample beyond this self-selected and specialist group, can these results be applied to a more general audience? 'Students prefer podcasts as a supplementary learning tool.' Does the application of this paradigm to an evaluation methodology limit the reference field?

Paradigm 2 – Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm
I use this evaluation method every time I stand in a classroom, or in an eLearning context, whenever I give feedback to a student. My quarrel with expanding this to evaluation of a whole group is that it seems impossible! Surely we all carry our own context with us? How is it possible to generalise? An example is student evaluation of lecturing staff – a regular analysis of our teaching – where even the outlier is a relevant sample. I agree with the general tenor of this philosophy but cannot reconcile how it can be used as a realistic evaluation tool.

Paradigm 3 – Critical Theory-Neomarxist-Postmodern-Praxis Paradigm
I find this approach very seductive, and especially relevant when teaching multicultural and socially diverse groups where such diversity is an everyday issue. One contention I have with course design is the impossibility of designing a neutral course. One example might be to evaluate the Microsoft Helpline; I would be really interested in comments on their offerings being evaluated using this paradigm. As a deliverer of an eLearning program on a worldwide basis, they would surely need to consider it – effectively they do not contextualise their teaching at all (beyond reference back to the program). I wonder if this is the Neomarxist approach in action – would they be horrified to consider this?

Paradigm 4 – Eclectic-Mixed Methods-Pragmatic Paradigm
Reeves' bias towards 'cherry picking' is obvious throughout the article, and the concluding paradigm comes as no surprise – 'horses for courses' is his mantra. The triangulation approach to educational evaluation is a common theme throughout much analysis. An example is the 360° approach used in management/human-resources surveys – seeing the evaluation from all angles.

Comparison
A brief flick through the types of evaluation models on the web proved a frustrating experience, as so many articles are subscriber-only. Most models refer to entire programs rather than discrete learning tools, which are my focus, so applying them required a mind-shift on my part. Another feature of the freely available articles – perhaps because many are from the United States – is the emphasis on ROI rather than the student-centred focus of the ELG.

Payne (2004), quoted in Johnson's (2008) lecture on evaluation models, has a moment of insight when he makes the analogy 'models are to paradigms as hypotheses are to theories'. Johnson's (2008) lecture notes place Patton's model under the heading 'Management Models' and outline Patton's (1997) emphasis on the utility of evaluation findings in the context of program design, with a focus on key stakeholders' key issues and the need for evaluators to 'work closely with intended users'.
Comment: I guess from my perspective this is a no-brainer. How could one evaluate anything without involving the customer?

Under the heading 'Anthropological Models', Johnson (2008) also gives a delightful insight into qualitative methods, which
'[t]end to be useful for describing program implementation, studying process, studying participation, getting program participants' views or opinions about program impact… identifying program strengths and weaknesses'.
More significantly, Johnson (2008) highlights the utility of this method, over research tied to specific objectives, in discovering 'unintended outcomes'.

I must admit to some confusion when I read Hogan's (2007) commentary on the management-oriented approach – it certainly does not correspond with Johnson's (2008) typology! Hogan (2007) provides some interesting critiques in his literature review. His 'participant-oriented approach', where the evaluator engages with the stakeholders as a problem-solving partner, seems logical.
Comment: I am assuming that he means the students (participants), the evaluator and the designers.

On a lighter note – and relevant to my learning tool – the article from Columbia University (2008) emphasised faculty and students working together to produce a mutually satisfying learning experience, through a heuristic review using expert users and a breakdown of several layers of student review: formative, effectiveness, impact and maintenance. It was easy to understand and, I guess, follows Stake's Responsive Model (look for your comment on this, Bronwyn). The feedback section was particularly instructive. I would assume that a triangulation method was used to gather the data.

This was a toughie! I felt more confused at the end – although it did open my mind up! Hopefully Evaluation Methods is more cut and dried and will appeal to my linear nature.

Cheers

Jennifer

References

Brittain, S., Glowacki, P., Van Ittersum, J., & Johnson, L. (2006). Formative evaluation strategies helped identify a solution to a learning dilemma. EDUCAUSE Quarterly. Retrieved March 14, 2008, from http://connect.educause.edu/Library/EDUCAUSE+Quarterly/PodcastingLectures/39987

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Erlbaum Associates.

Hegarty, B. (2003). Experimental and multiple methods evaluation models. Retrieved March 25, 2008, from http://wikieducator.org/Evaluation_of_eLearning_for_Best_Practice

Johnson, B. (2008). Lecture two: Evaluation models. Retrieved March 26, 2008, from www.southalabama.edu/coe/bset/johnson/660lectures/lec2doc

Hogan, R. L. (2007). The historical development of program evaluation: Exploring the past and present. Online Journal of Workforce Education and Development. Retrieved March 28, 2008, from http://wed.siu.edu/Journal/VolIInum4/Article_4.pdf

Columbia University. (2008). Six facets of instructional product evaluation. Retrieved March 27, 2008, from http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt

Monday, March 24, 2008

Week 3 - still plugging on

The eLearning Guidelines site (http://elg.massey.ac.nz) contends that, to be successful, the implementation of e-learning guidelines should happen at an institution-wide level. I'm not sure (but will find out) whether MIT has adopted specific guidelines regarding e-learning. As this course asks me to identify e-learning guidelines specific to my project, I will continue on that track. Obviously, if my institution has guidelines then my selection should be congruent with the global policy, but at this stage I am boxing in the dark (Easter holiday catch-up here).

Scenario: Unit Standard 16342 – Organisational Principles – Level 4
Students are required to demonstrate an understanding of organisational charts, their different types, and the advantages and disadvantages to an organisation of applying each type.

Stage 1
Students view a presentation giving them basic information about different organisational structures – this progresses from simple to complex structures.
Stage 2
Students are given scenarios (based on the above) and manipulate structures to form a suitable organisational chart for each scenario. Feedback would be available to verify correct structures, and students would be given unlimited opportunities to trial their various structures (a rough sketch of this feedback logic appears below).
Stage 3
Peer review – students discuss (online) their rationale for each structure chosen, and the advantages and disadvantages to an organisation of their choice.

At this stage this is envisioned as a formative exercise, but after a trial period it could transition to a summative assessment, as it meets the Unit Standard requirements.
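To make Stage 2 a little more concrete, here is a minimal sketch of how the feedback loop might work. To be clear, everything here – the scenarios, the structure names and the give_feedback function – is an invented placeholder of my own, not part of any existing eMIT tool.

```python
# A minimal sketch of the Stage 2 feedback loop: a student proposes an
# organisational structure for a scenario and receives immediate formative
# feedback, with unlimited attempts. All names and data are invented.

SCENARIOS = {
    "corner dairy": {
        "description": "A family-owned dairy with three staff",
        "suitable": {"flat"},
    },
    "national retailer": {
        "description": "A retail chain with 40 branches nationwide",
        "suitable": {"divisional", "matrix"},
    },
}

def give_feedback(scenario: str, chosen_structure: str) -> str:
    """Return formative feedback on the structure a student has chosen."""
    suitable = SCENARIOS[scenario]["suitable"]
    if chosen_structure in suitable:
        return (f"Yes - a {chosen_structure} structure suits this organisation. "
                "Note its advantages and disadvantages for the Stage 3 discussion.")
    return (f"A {chosen_structure} structure is a poor fit here - "
            "revisit the Stage 1 presentation and try another structure.")

# Unlimited attempts: the student simply tries again after each response.
print(give_feedback("corner dairy", "matrix"))  # poor fit -> retry prompt
print(give_feedback("corner dairy", "flat"))    # good fit -> confirmation
```

In a real tool the 'suitable' sets would come from the course designer, and more than one structure can be acceptable for a scenario – the point of Stage 3 is precisely to discuss those trade-offs.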

My selected guidelines
1.1
TD1 – I have 'trialled' this in the classroom. When teaching this F2F, I found students have difficulty understanding the significance to an organisation of an inappropriate structure, and also the transition (whether reactive/organic or a planned restructure) from one type of structure to another. In my trial I initially used volunteers (with enthusiastic input from observing students) and groups working at the whiteboard. I then designed a presentation (Stage 1) which students find useful as an initial learning tool. My initial research indicates that the intended learning outcomes could be achieved with a well-designed e-learning tool. At this introductory level in the Management discipline there are few resources available – the level is comparable to A-level in the UK, and I have searched unsuccessfully for resources specific to this topic. Most resources are at Stage 1 undergraduate level, which is far too complex (TD13).

TD3 – by placing the three stages on our eMIT site, students would have the opportunity to explore the learning tool at their own pace. Feedback on their submitted structures would enable them to revisit any inappropriate structure (TT3).

A discussion board would give the less experienced student the support to enable deeper learning to take place. At this stage MIT does not plan to make this course web-based or web-enhanced – just web-supported – so discussion of results would be F2F. But I envisage that this could transition to a web-based course, and therefore TD10 is a possibility I should consider when designing/deploying this learning tool.

MD3 – I have already consulted our e-learning team, and their input has been invaluable in adjusting the design and realising the possibilities of using an e-learning instrument for this discrete learning outcome.

Outside of the ELG, I also found 'Kirkpatrick's Learning and Training Evaluation Theory' (http://www.e-learningguru.com/articles/art2_8.htm) particularly interesting. The article discusses its value in technology-based training – in particular the 'Behaviour in the Workplace' section: had transfer of knowledge/skills occurred? Although not directly related to my learning outcomes, this is very relevant to many courses conducted in a Polytechnic environment, where this transfer of knowledge is vital to the continued success of our courses.

Sunday, March 16, 2008

Week 2 post - slowly catching up!!

Importance of evaluation to me

I guess it's a comfort blanket. If a course is evaluated, it validates my delivery and gives me confidence as a teacher in my materials and assessment instruments.

I would define evaluation, in my context at Manukau Institute of Technology, as:
· The course evaluation taken by all students at completion
· Pre- and post-moderation of assessments – this gives me a peer review
· Usability of my teaching methods – am I setting up barriers to learners?
· If team teaching, feedback from members of my team on the development of teaching resources, delivery methods and assessments
· Feedback by students during the course
· Retention of students during the course
· My marking of student assessments – gives me feedback on teaching and course resources
· Student results

Methods used that are familiar
Observation – formative teaching review done at my request by a peer
Questionnaires – completed by students, partly in checklist form with a range over 5 levels (poor – excellent), supported by a comment area where students are prompted to respond on areas they found particularly helpful and areas where they felt improvement could be made. These form part of my personal evaluation and feed into any application for promotion on my part. (A rough sketch of how such ratings might be tallied follows the list below.) These evaluations are conducted if:
· A new course is being delivered
· A lecturer is new to delivering an existing course
· Self-selection by lecturer (two courses per semester are mandatory)
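As an aside, here is a minimal sketch of how responses on the 5-level scale might be tallied; the item and the numbers are invented purely for illustration.

```python
from collections import Counter

# Hypothetical responses to one questionnaire item on the 5-level scale
# (1 = poor ... 5 = excellent). These numbers are invented for illustration.
responses = [4, 5, 3, 4, 5, 2, 4, 5, 5, 3]

counts = Counter(responses)
mean = sum(responses) / len(responses)

print(f"Mean rating: {mean:.1f} / 5")
for level in range(1, 6):
    print(f"Level {level}: {'#' * counts.get(level, 0)}")
```

The comment areas, of course, need qualitative reading rather than tallying.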

Why is quality important in elearning?
I think quality is important in all learning. Is eLearning any different?
One significant factor is cost-effectiveness/ROI (the commitment of resources will usually be higher than for conventional methods), so an institution needs to be assured of eLearning's effectiveness relative to other methods of delivery.
eLearning feedback from students is constrained by the lack of F2F contact. If a class group is dissatisfied with their learning delivery, the feedback is fairly immediate (not many turn up to the next lecture); email contact with a dissatisfied student is like flinging a ball into the void!
eLearning – we are still learning about its level of effectiveness. Anecdotally, it seems there is a reduced completion rate in comparison to F2F, so an increased emphasis on a quality environment is an attempt to improve these statistics.

Thursday, March 13, 2008

Lots of reading

Hi everyone,

I really enjoyed Bronwyn's video (although somehow I could not turn up the volume, so I had to lean very close to my screen – quite an interesting experience). I have gone through most of this week's work – by which I mean I printed and/or saved lots of readings. Being a reflective learner, this means I won't be sleeping anytime soon.... One point that really came through to me is that designing the appropriate plan is paramount – after all, you only get what you ask for. The methodology should be appropriate, and the triangulation concept makes lots of sense. I also think the plan should be able to be modified to meet changing situations. If, say, a focus group pans the learning module, then go back to stage 1 to discover whether it's the methodology at fault or the course being evaluated. This is off the top of my head – so it's away to read.

Have a good weekend
Jennifer

Sunday, March 9, 2008

And so to bed

Hi
It's been a long day, but at last I have made some progress on this course. I'm looking forward to spending some time reflecting (yes, I'm one of those) on all the media and links provided. I started to look at Bronwyn's video, but I'm afraid I'm too sleepy to appreciate it at the moment – I also need to figure out how to turn up the volume on my laptop. Bronwyn, I tried to add a photo to this blog, but after half an hour I gave up as it seemed to take forever to upload. Any tips here?

Sweet Dreams
Jennifer