Four Traps in Evaluation

Objective

This is a reflection on the four traps in evaluation described by Fenwick and Parsons (2009): measuring what’s easiest to measure, underestimating the learning embedded in evaluation, unexamined power, and reductionism (pp. 9-11).

Reflective

These four traps in evaluation really stood out to me, in large part because they gave me an opportunity to take a long, hard look in the mirror. Unfortunately, I didn’t like what I saw. For instance, measuring what’s easiest to measure is a trap that I fell into as recently as last week.

In a course that I’m currently teaching, one of the learning objectives requires students to explain how epidemiological studies measure the degree of causality. This was one of thirteen learning objectives under the general principles of toxicology and, in hindsight, a fairly minor part of the module. However, as I reviewed the textbook while preparing the first test for the course, I noticed the ten factors of causality outlined quite nicely and thought, “this would make a really nice matching question.” The other thing I liked about the idea was how easy matching questions are to grade: just a quick check against a pre-determined list of letters. Looking back on that decision after reading about the four traps in evaluation, I really questioned myself. Was the students’ ability to match those terms and statements really worth ten of sixty marks on the first test? Did that even evaluate the learning objective? Was I just measuring what was easy to measure? I didn’t like my honest answers to those questions, and I didn’t feel particularly good about it.

Further, I really connected with the second trap: underestimating the learning embedded in evaluation. It was interesting to read that learners are highly susceptible to learning during evaluations. I had never thought of an evaluation as a learning tool, only as something to test the students. Thinking of an evaluation as a tool that helps cement learners’ knowledge, and as something learners remember about the course, really shifted my perspective on how powerful evaluations can be. But they have to be prepared properly, and my hope is that I don’t fall into any more of these traps.

Interpretive

There is a long road to travel before I am preparing effective evaluations. Perhaps having a method or process for creating them is one way of avoiding the four traps. As I started looking into methods and processes, another term kept popping up and really caught my eye: authentic. Even in the initial stages of my investigation, I began to question myself in a deeper way. Questions like, “was that matching question the right way to evaluate that learning objective?” became, “was this kind of pen-and-paper test even the right way to evaluate the students?”

Ashford-Rowe, Herrington and Brown (2014) set out to establish the critical elements that determine authentic assessment, with the purpose of formulating an effective model for task design. The eight critical elements the study identified were: the assessment should be challenging; the outcome should be a performance or product; it should ensure transfer of knowledge; metacognition should be a component; a requirement to ensure accuracy should be present; the environment and the tools used to deliver the task should be real; an opportunity to discuss and provide feedback should be provided; and collaboration should be a component. They concluded that students responded well to tasks designed around these elements, attributing this to “a clear understanding of the ultimate workplace benefits of having to produce authentic outcomes within authentic environments, with the use of authentic tools as part of the learning assessment” (p. 220).

Meyers and Nulty (2009) used five curriculum design principles to align authentic learning environments and assessments for third-year undergraduate students in environmental and ecological studies. The principles they incorporated into their teaching and learning materials, tasks and experiences were: authentic, real-world relevance; constructive, sequential, and interlinked; progressively higher-order cognitive processes; alignment; and challenging. What they created were “learning environments that consisted of a broad range of learning resources and activities which were structured and sequenced with an integrated assessment strategy” (p. 565). They concluded that, as a result, students were compelled to engage deeply with their learning.

Kearney (2013) was quite critical of traditional assessments in higher education, stating that “students are significantly and detrimentally disengaged from the assessment process” and that such assessments “do not address key issues of learning” (p. 875). He set out to conceptualize a model that improves student engagement by using authentic self- and peer-assessment for learning to enhance the student learning experience. The result was the authentic assessment for sustainable learning (AASL) model, which combines self-assessment, peer-assessment and instructor assessment to produce a summative grade. In this model, the instructor’s assessment accounts for 40% of the overall grade; two peers collaboratively mark another student’s anonymous task, with each peer’s mark accounting for 15%; and lastly, the student marks their own task against the criteria, with the perspective of having seen a peer’s assignment, for the final 30%.
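To make that weighting concrete, here is a minimal sketch in Python of how the four marks might be combined into a summative grade. Only the 40/15/15/30 split comes from Kearney (2013); the function and variable names are my own illustration, not anything prescribed by the paper.

def aasl_grade(instructor, peer_1, peer_2, self_mark):
    # Combine AASL marks (each out of 100) into one summative grade.
    # Weights follow Kearney's (2013) split: instructor 40%,
    # each of two peers 15%, self-assessment 30%.
    return (0.40 * instructor
            + 0.15 * peer_1
            + 0.15 * peer_2
            + 0.30 * self_mark)

# Example: instructor mark 72, peer marks 80 and 76, self mark 85
# combine to a summative grade of 77.7.
print(aasl_grade(72, 80, 76, 85))

Seen this way, the instructor still anchors the grade, but the majority of the marking judgment (60%) rests with the students themselves, which is exactly the redistribution of power the third trap warns is usually missing.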

These papers have really made me wrestle with the idea of traditional assessments that require students to individually complete what they may well deem a meaningless task in order to be judged by the instructor. Multiple choice, true and false, matching, short and long answer: however varied those question types seem within the format of a traditional assessment, every one of them can fall into an evaluation trap. In fact, the traditional assessment itself appears to be the problem; no matter how well it is prepared and written, it has the potential to fall into any one, or all, of the traps.

Decisional

It is time to put everything I’ve learned about traditional evaluations behind me. In fact, I will probably shred the “exam checklist” that has guided my evaluation writing for the past several years. Traditional assessments are simply too vulnerable to the four traps: they tend to measure only what’s easiest to measure, they underestimate the learning embedded in evaluation, they take all of the power out of the students’ hands and place it squarely in the instructor’s, and they reduce complex learning to a short-term snapshot.

I would like to scrap all of the tests I have been making my students write for the past several years. I want the 30% of the final grade currently allotted to “tests” to be redirected entirely to more progressive forms of authentic assessment (the same goes for the 35% allotted to the “final exam”). Probably the biggest takeaway from PIDP 3100 and PIDP 3210 was to incorporate collaboration, reflection, integrated assessment, and real-world problems into every lesson I teach. While I believe I’m making an honest effort at that, and I’ve already seen some of the benefits in my classroom, this is the first time I’m realizing that my evaluations should incorporate those four critical aspects of adult learning just as fully. I suppose this is the final connection in the concept of alignment between the course outline, instructional strategies, and evaluations.

While a complete rework of all the evaluations I’ve developed so far is certainly daunting, it’s comforting to know that existing models can help with the process, and Kearney’s AASL is a perfect example. There is still time this semester to incorporate that model into one of the modules I teach. A good, realistic first step is to substitute an authentic assessment of some kind for one of my upcoming traditional tests worth 10% of the final grade.

References

Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222.

Fenwick, T. J., & Parsons, J. (2009). The art of evaluation: A resource for educators and trainers (2nd ed.). Thompson Educational Publishing.

Kearney, S. (2013). Improving engagement: The use of ‘authentic self- and peer-assessment for learning’ to enhance the student learning experience. Assessment & Evaluation in Higher Education, 38(7), 875-891.

Meyers, N. M., & Nulty, D. D. (2009). How to use (five) curriculum design principles to align authentic learning environments, assessment, students’ approaches to thinking and learning outcomes. Assessment & Evaluation in Higher Education, 34(5), 565-577.
