Saturday, March 29, 2008

Week 4 - wow this was a hard one!

Week 3 post-script
Checked on Manukau Institute policy re eLearning guidelines. Historically we are not seen as a distance/eLearning-based institute, and hence have not adopted the NZ ELG. Current research is ongoing into 'how our eLearning supports current student needs in a flexible delivery environment'. I suspect that, given the contraction of distance learning (re the Canterbury Poly scandal before the last election) and the repercussions for distance learning delivered by Polys in NZ, we are being very careful. Current TEAC policy is that we 'stick to our own catchment area' – eLearning does not comply!

Notes for Blog – Week 4 – Educational Paradigms

Paradigm 1 - Analytic-Empirical-Positivist-Quantitative Paradigm
Impossible to separate parts from wholes – cause and effect relationships are too complex to measure in a scientific manner. This method was used by the University of Michigan (U-M) School of Dentistry when they applied two of Flagg's formative evaluation measures (Brittain, Glowacki, Van Ittersum & Johnson, 2006). Pilot 1 – media format – lends itself to this approach. Student preferences could be easily identified using their logs and student surveys. The technological basis of the question lends itself to this quantitative approach.
I find it really difficult to 'get my head around' the application of this paradigm in an all-encompassing evaluation methodology. Firstly, I have to admit that my personal attitudes play a role in my philosophical position. At this stage I could say 'I rest my case' – evaluator bias! Using a 'scientific' approach surely requires control groups against which those sampled can be measured. Hegarty (2003) identifies the ethical dilemma of providing one group with different learning tools while running a control group to measure effectiveness, and also notes the possibility of subject bias when such an evaluation uses self-selected groups.

The primary purpose of the research is to identify the preferred delivery technology. Without a larger sample beyond this self-selected and specialist group, can these results be applied to a more general audience? 'Students prefer podcasts as a supplementary learning tool.' Does the application of this paradigm to an evaluation methodology limit the field of reference?

Paradigm 2 – Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm
I use this evaluation method every time I stand in a classroom, or in an eLearning context, when I give feedback to a student. My query with expanding this to the level of evaluating a group is that it seems impossible! Surely we all carry our context with us? How is it possible to generalise? An example is student evaluation of lecturing staff – a regular analysis of our teaching – where even the outlier is a relevant sample. I agree with the general tenor of this philosophy but cannot reconcile how it can be used as a realistic evaluation tool.

Paradigm 3 – Critical Theory-Neomarxist-Postmodern-Praxis Paradigm
I find this approach very seductive – especially relevant when teaching multicultural and socially diverse groups, where such diversity is an everyday issue. One contention I have with course design is the impossibility of designing a neutral course – one example would be to evaluate the Microsoft Helpline. I would be really interested in comments on their offerings being evaluated using this paradigm. As a deliverer of an eLearning program on a worldwide basis they would surely need to consider it. Effectively they do not contextualise (beyond reference back to the program) any teaching. I wonder if this is the Neomarxist approach in action – would they be horrified to consider this?

Paradigm 4 – Eclectic-Mixed Methods-Pragmatic Paradigm
Reeves' bias towards 'cherry picking' is obvious throughout this article, and the concluding paradigm comes as no surprise. 'Horses for courses' is his mantra. The triangulation approach to educational evaluation is a common theme throughout much analysis. An example is the 360° approach used in management/human resource surveys – see evaluation from all angles.

Comparison
A brief flick through the types of evaluation models on the web proved a frustrating experience, as so many articles are subscriber-only. Most models refer to entire programs rather than discrete learning tools, which is my focus, so application required a mind-shift on my part. Another feature of the freely available articles – perhaps because many are from the United States – is the emphasis on ROI, rather than the student-centred focus of the ELG guidelines.

Payne (2004), in Johnson's (2008) lecture on evaluation models, has a moment of insight with the analogy 'models are to paradigms as hypotheses are to theories'. Johnson's (2008) lecture notes place Patton's model under the heading 'Management Models' and outline Patton's (1997) emphasis on the utility of evaluation findings in the context of program design, with a focus on key stakeholders' key issues and the need for evaluators to 'work closely with intended users'.
Comment: I guess from my perspective this is a no-brainer. How could one evaluate anything without involving the customer?

Under the heading 'Anthropological Models', Johnson (2008) also gives a delightful insight into qualitative methods, which 'tend to be useful for describing program implementation, studying process, studying participation, getting program participants' views or opinions about program impact… identifying program strengths and weaknesses'.
More significantly, Johnson (2008) highlights the utility of this method, over specific objective research, in discovering 'unintended outcomes'.

I must admit to some confusion when I read Hogan's (2007) commentary on the management-oriented approach. It certainly does not correspond with Johnson's (2008) typology! Hogan (2007) provides some interesting critiques in his literature review. His 'participant-oriented approach', where the evaluator engages with the stakeholder as a problem-solving partner, seems logical.
Comment: I am assuming that he means the students (participants), evaluator and designers.

On a lighter note – and relevant to my learning tool – the article by Columbia University (2008) emphasised faculty and students working together to produce a mutually satisfying learning experience, through a heuristic review using expert users and several layers of student review: formative, effectiveness, impact and maintenance. It was easy to understand and I guess follows Stake's Responsive Model (look for your comment on this, Bronwyn). The feedback section was particularly instructive. I would assume that the data was gathered using a triangulation method.

This was a toughie! I felt more confused – although it did open my mind up – at the end! Hopefully Evaluation Methods is more cut and dried and appeals to my linear nature.

Cheers

Jennifer

References

Brittain, S., Glowacki, P., Van Ittersum, J., & Johnson, L. (2006). Formative evaluation strategies helped identify a solution to a learning dilemma. EDUCAUSE Quarterly. Retrieved March 14, 2008, from http://connect.educause.edu/Library/EDUCAUSE+Quarterly/PodcastingLectures/39987

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Erlbaum Associates.

Hegarty, B. (2003). Experimental and multiple methods evaluation models. Retrieved March 25, 2008, from http://wikieducator.org/Evaluation_of_eLearning_for_Best_Practice

Johnson, B. (2008). Lecture two: Evaluation models. Retrieved March 26, 2008, from www.southalabama.edu/coe/bset/johnson/660lectures/lec2doc

Hogan, R. L. (2007). The historical development of program evaluation: Exploring the past and present. Online Journal of Workforce Education and Development, II(4). Retrieved March 28, 2008, from http://wed.siu.edu/Journal/VolIInum4/Article_4.pdf

Reeves, T. (n.d.). Six facets of instructional product evaluation [PowerPoint slides]. Retrieved March 27, 2008, from http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt

5 comments:

Bronwyn hegarty said...

Jennifer this is a fantastic post! You are well on your way now.

If you wish to debate further the beauty and usefulness of the Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm, check out Rika's blog - for example, "Evaluating from this perspective i would immerse myself in the world of the subject and try and understand how they had learnt and how the course had affected their construction and interpretation of reality".

You make an excellent point about the Microsoft Helpline. It will be interesting to see if others agree. Microsoft would be horrified, but it is exactly that - 'create how you think it should be and the masses will come'.

Your example about the "360° approach used in management/human resource surveys" is a goodie for the Eclectic-Mixed Methods-Pragmatic Paradigm. Possibly my prejudice against that paradigm has blocked it out previously as a good example.

Good on you trawling through the information on models. I will have to get back to you about the "Columbia University (2008)" model and the relevance of "Stake's Responsive Model". As long as they collected both qualitative and quantitative data, it would fit the model. I will have to look more closely at the article and the methodology they used.

Which paradigm and model are you thinking will suit your evaluation project?

Bronwyn hegarty said...

Jennifer, can you please clarify where "the article by Columbia University (2008)" is located? Do you mean the link to Columbia University - the six facets of evaluation presentation by Reeves?

Gordon said...

Great post Jen.

I really liked the quote about models being to paradigms what hypotheses are to theories. It is about using general principles to create something useful (tangible?). I have posted elsewhere about the themes that have been recurrent between the paradigms and the models.

Interesting to compare our contrasting views on the paradigms. Being from a scientific background I can identify with 1, and I don't like 2 and 3, generally. I prefer 4 overall because of the 'pragmatic'. You say you have a linear nature, yet you seem to like the anthropological paradigm 2 and the sceptical paradigm 3, both of which would seem to suit a 'star burst' approach, i.e. multiple, parallel lines of inquiry and possibilities. Perhaps you are linear like the multiple spokes of a wheel??

Sue said...

Appreciated reading your post, Jennifer.
I too found this one a tad hard to get my head around.
It's great to read your perspective and the comments posted.

Donna said...

Hi Jennifer
This was a hard week, wasn't it? You have done a fantastic job of getting your head around it.