Saturday, April 26, 2008

Catching up - weeks 5 and 6

Hi
What a lot of reading!!! Here are my results – lots of food for thought.
Type of Evaluation for my project
Background
The eLearning module under scrutiny is in the management discipline. The course is at level 4 (NZQA), but few texts and no interactive tools pitched at this level are available, either locally or internationally. The intention is to create a learning tool to enable students to:
a) Understand the formation and development of organisational structures
b) Apply organisational structures from the learning tool to a range of existing organisations within the New Zealand context, using an interactive model: after viewing the introductory models, students could manipulate structures to meet a range of criteria
c) Receive feedback on the accuracy of their answers
d) Make multiple attempts
e) Use the learning tool outside the Polytechnic environment, provided they have Internet access.
The intention is to deliver this learning module in a web-enhanced mode – integrated into a course currently delivered f2f, with the possibility of a blended/distance learning course in future. The module could also be used in a number of introductory management courses (eLearning guidelines for New Zealand, http://elg.massey.ac.nz, retrieved 25.03.08).

The purpose of the feedback from students and teachers is to enable this learning tool to be developed so it is:
· User friendly for students
· Effective as a learning tool
· Able to help students transfer theoretical knowledge to a range of real-life situations.

Needs analysis
I have already conducted this phase of the project in several stages:
1. Students manipulated a card system to produce appropriate organisational models after f2f teaching on this topic
2. Students viewed (and had continuous access to) a PowerPoint presentation on the standard range of organisational structures with supplementary notes to reinforce transition phases and rationale for restructuring.
These learning tools have been enthusiastically received by the students and have had a positive effect on their learning outcomes, as evidenced by:
· Anonymous feedback forms with open-ended questions
· Student results in summative evaluation essays, compared with those of students in earlier courses who did not have access to Phase 1 (the cards) or Phase 2 (the PowerPoint).

These methods were conducted prior to this evaluation course and therefore do not have the rigour that could now be applied. A critique of these evaluation methods indicated that the results are both anecdotal and unscientific. One issue I have is that the course is currently delivered to a small cohort (6 – 12 students). I would appreciate advice on this conundrum.

Methods of Evaluation
Initially the evaluation would be formative – gathering feedback from users and other relevant groups during the development and implementation process so that improvements and adjustments can be made (www.wikieducator.org). The rationale is that this learning tool is still in the development stage. I am inclined to use three methods, conducted in stages – although Stake's comment on formative vs summative evaluation (in Lockee, Moore & Burton), that when the chef tastes the soup it's formative and when the diners (or a food critic) taste the soup it's summative, may well change my mind!

Method 1
Once the learning tool has been developed by our in-house designers, an expert review will be conducted by other lecturers in the management field to test for content relevance and accuracy (Lockee, Moore & Burton, 2002). At this stage I need to research an appropriate model in more detail, but I am thinking of an open-ended questionnaire.
Method 2
A field trial of the completed package with the student group (Lockee, Moore & Burton, 2002). This would be based on Kirkpatrick's model for summative evaluation (although it is summative, I really do think it is a good evaluation model). The idea is to test the effectiveness of the knowledge transfer from the learner's perspective. A feedback form would be included at the end of the initial session; integrating it at this stage would increase the likelihood of a student response. The form would use a Likert scale, followed by an opportunity for student comment in an open-ended format. Involving the whole class in the survey would not be an issue for analysis, given the small number of students currently studying this unit.
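Purely as an illustration of how Likert responses from a cohort this small could be tallied, here is a minimal sketch in Python (this is hypothetical and not part of the plan – the item wording and ratings are invented). With only 6–12 respondents, medians and ranges are probably more informative than means.

# Minimal, purely illustrative sketch: summarising 5-point Likert responses
# from a small cohort (6-12 students). Item wording and ratings are hypothetical.
from statistics import mean, median

# 1 = strongly disagree ... 5 = strongly agree
responses = {
    "The tool was easy to navigate": [4, 5, 3, 4, 5, 4, 2],
    "The tool helped me apply structures to real organisations": [3, 4, 4, 5, 4, 3, 4],
}

for item, ratings in responses.items():
    print(item)
    print(f"  n={len(ratings)}  median={median(ratings)}  mean={mean(ratings):.1f}  range={min(ratings)}-{max(ratings)}")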
Method 3
Based on the observation sessions used by the Analysis and Evaluation Group (2005). My rationale is that this learning tool is a new approach to the management topic and involves quite complex manipulation, so it is essential that it is user friendly. This method will also allow an element of peer review (from the teacher's perspective). Jones (in Harvey, 1998) cautions on the role of the observer and provides a number of very relevant guidelines for me to consider when conducting this stage.

By adopting these multiple methods I hope to emulate the eclectic-mixed methods-pragmatic paradigm. For example, usability would be measured through student feedback and also through lecturer observation (Hegarty, 2008).


References
Analysis and Evaluation Group (2005). Evaluation plan for usability testing of module prototypes. Retrieved March 24, 2008, from www.wikieducator.org

e-Learning guidelines for New Zealand. B: Who will use the e-Learning guidelines? Retrieved March 22, 2008, from http://elg.massey.ac.nz

Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. Educause Quarterly, Number 1.

Hegarty, B. (2008). Types of evaluation models. Otago University.

Jones, C. (1998). Ethnography. In J. Harvey (Ed.), Evaluation cookbook (pp. 34-35). Learning Technology Dissemination Initiative, Scottish Higher Education Funding Council. Retrieved April 15, 2008, from www.icbl.hw.ac.uk/ltdi/cookbook

WikiEducator. Evaluating the impacts of eLearning/Evaluation methods. Retrieved April 20, 2008, from www.wikieducator.org/WikiEdProfessional_eLearning_Guidebook/Evaluating

Summary

Evaluation Plan for Usability Testing of Module Prototypes – Prepared by Analysis and Evaluation Group December 2005
The project is to develop interactive information literacy modules for the tertiary sector in New Zealand. The needs analysis clearly shows a gap in available tools to address both student and lecturer requirements. The purpose of the modules is to allow a diverse range of students to readily access and use them to enhance their learning outcomes. The objective is to test that the usability and design of the modules are appropriate for the intended learners.

In order to address the issue of validity, three institutions participated in the initial study. It was envisaged that the study would later be rolled out across other institutions in the tertiary sector.

The study clearly identified the areas of significance to be tested:
· User reaction
· Engagement with learners
· Appropriateness for the New Zealand tertiary sector
· Ease of updating and re-contextualising

The multiple-method design employed here is rigorous: both student and expert feedback are sought. The range of data collection is comprehensive: questionnaire, interview, usability observation and expert review. The triangulation of data (Hegarty, 2008) with the pragmatic approach of the eclectic-mixed methods orientation (Reeves, 2006) should result in an accurate review. Combinations of formative and summative evaluation are conducted, with both qualitative and quantitative methods employed.

A particularly interesting aspect of the evaluation is the methodology employed in the usability questionnaire for both students and staff. Both are asked to ‘voice’ their reactions as they ‘test’ the modules; this is recorded, in addition to the written questionnaire. Although the stated rationale for this method is ‘to ensure the observer doesn’t miss anything’, it would add another dimension to the testing. It is not clear whether these recordings are attributable, or whether software such as CODE-A-TEXT (McAteer, in Harvey, 1998) is used to analyse them.

The pre-testing information sheets provide excellent information to the participants, covering such topics as instructions, time-line and confidentiality. The content of the questionnaires is divided into three sections: usability; instructional design; and content and effectiveness for learning.

The design of this evaluation should be effective in achieving its purpose.

References
Hegarty, B. (2008). Types of evaluation models. Otago University.

Reeves, T. (2006). Educational paradigms. Retrieved from cache February 5, 2006, from www.educationau.edu.au/archives/CP/REFS/reeves_paradigms.htm

McAteer, E. (1998). Interviews. In J. Harvey (Ed.), Evaluation cookbook (pp. 40-43). Learning Technology Dissemination Initiative, Scottish Higher Education Funding Council. Retrieved April 15, 2008, from www.icbl.hw.ac.uk/ltdi/cookbook


Relevance to my project
As I intend to use a multiple-method evaluation for my project, the above design is relevant. I found the observation section particularly interesting, although I doubt that recording the sessions is possible in my context – I would consult my IT experts on this! The triangulation concept is definitely one I shall apply, as I need expert advice on design, feedback on subject content, and then the student reaction to the actual learning tool. I really like the questionnaire design – it seems to cover all bases – and I would consider using these questions. This is a really useful model for me to follow. I would like to see the results of this evaluation – are they available?

Cheers

Jennifer

1 comment:

Bronwyn Hegarty said...

A great catch up post Jennifer! :)

I am a tad confused though and would appreciate some clarification. Am I right that you will be undertaking a formative evaluation using:
1. expert review (this is good) of the prototype (the learning tool before you give access to students);
2. Kirkpatrick's model for summative evaluation - once students have used the learning tool;
3. peer review using observation - to get the teachers' perspectives?

"formative– involve gathering feedback from users and other relevant groups during development and implementation process "

Using Kirkpatrick's model could be tricky - level one measures reactions from the users, so this aspect is ok, but because it is done at the end of the use of the learning tool you will be missing out on a whole valuable opportunity. That is usability - which can be done before students obtain access - it is always a good idea to get some students and teachers to test a new product before it goes "live" to check that the design is usable.

Once you go "live" with the first iteration of the learning tool, gathering feedback as you go and at the end of the trial is also formative - so you could adapt Kirkpatrick's model to fit this. You do need to decide if you are evaluating usability of the product - learning design, navigation, effectiveness for learning etc. This is about the users' opinions and perceptions of how the design has helped them learn, NOT about what they have learned - and it can be included here.

BUT you will not be able to measure against levels 3 & 4 as these are definitely summative. I am also dubious about evaluating against level 2 as this is about measuring their learning - achievement against learning objectives, attitudes, skills, knowledge and how this has changed - rather than how the design helps learning - I hope you can see the difference.

Now I will start at the end and work back if that is okay.

The needs analysis you have already done could be very useful, for example, you could mention this in the background of your plan.

Your conundrum about the size of the cohort - 6 to 12 students: this small number is fine, especially if you gather qualitative and quantitative data as part of a mixed-methods approach. In a qualitative evaluation you need to keep the numbers manageable.

I also suggest you indicate all the suggested methods in your plan but choose only two to carry out; otherwise this project could be too big.

The report of the evaluation you mention can be found at: http://oil.otago.ac.nz in the section on evaluation.

So you need to decide - is your formative evaluation going to be about the usability of the new learning tool (expert review and student feedback), or will you conduct a summative evaluation and look at the students' reactions (level 1) and measure what they have learned (level 2) - Kirkpatrick's model?

I am looking forward to a discussion of your ideas.