Program Outcomes


This is a written reflection on the idea of learning outcomes as they relate to curriculum development, as proposed by Bowen (2012) in Teaching Naked (p. 263).  He discusses this in the context of “the naked curriculum,” specifically as it relates to improving curricular progression.


I was immediately struck by this topic because I am currently working on revising the curriculum of the occupational health and safety diploma program at my college.  It’s a daunting task, and despite numerous key insights from the PIDP thus far, curriculum development still feels out of my league.  While I have improved dramatically as an instructor since the first time I was involved in this process six years ago, especially in being able to envision an alignment between learning objectives, participatory learning activities, and assessments, translating that from a course level to a program level presents a far greater depth and breadth of complexity.  It’s definitely something I’m currently struggling with, and I’m challenging myself to dig deep and do as good a job as I possibly can.  After all, students who enter the program over the next five to seven years will have to live with the major decisions being made behind the scenes right now.  And that’s what’s motivating me.

What this course has taught me, and where my educational philosophy is starting to lean, is that with each passing day content is becoming less and less relevant.  Information is out there, and increasingly it’s freely available.  What I’m wrestling with is that our program development meetings are always so focused on choosing the content we have to cover and turning it into learning objectives on course outlines.  That ultimately becomes the content we will be held accountable to deliver.  While I’m more interested in teaching students the processes of creative and critical thinking, self-regulation of learning, metacognition and reflection, and in developing students with growth mindsets and 21st century learning skills, the institution seems much more focused on defining the content that has to be taught.  I see merit in both educational philosophies, but I’m finding it very difficult to blend the two sides of that coin into a cohesive curriculum.


I’m grateful for the opportunity to develop a two-year diploma program essentially from scratch.  When I first arrived as faculty at my institution, the course outlines for this program had been developed by a previous instructor with no experience in occupational health and safety.  None of the course material, however, had been developed at that time.  I spent my first few years developing that content on the fly.  When I first had an opportunity to participate in a program review, I decided on only very minor changes, as I was still going through the whirlwind of trying to figure out how to be a better teacher and develop more and better content.  As the old saying goes, I wish I knew then what I know now.

In my opinion, the program needs a lot of changes, and the changes are major.  The main issue, in my view, is that it needs to be a more cohesive curriculum from start to finish, with a better focus on developing students from the time they enter the program until they graduate.  Right now, the courses operate as silos, and while there are some great learning opportunities throughout the program, none of them necessarily build on prior ones or lead to anything bigger or better.  I now realize that our program outcomes are the first thing that needs to be addressed.  Once those are developed, creating individual courses with learning objectives will be easier, as they will have to be based on the ultimate goals.  In essence, there will be an overall map guiding the way.

Dr. Cathy Barrette, Director of Assessment at Wayne State University, provides a great analogy for the difference between course and program learning outcomes.  In a PowerPoint presentation on the topic that I retrieved online, she illustrates learning objectives as the individual, uncut vegetables and program outcomes as the salad.  She states, “the degree of preparation and integration makes the salad.  The ingredients contribute to the salad, but a salad is more than the sum of its parts” (Barrette, n.d., slide 5).  This simple analogy put things in perspective for me.  It made clear that my goals for graduates are indeed program outcomes: graduates who can think creatively and critically, who can execute problem-based learning projects, who can self-regulate and self-direct their own learning, who have 21st century skills and growth mindsets, and who leave as reflective practitioners.  This is the framework within which we can deliver content.  And as long as content delivery is always framed through the lens of these program outcomes, students should successively build on them throughout their two years in the program, so that they leave having mastered all of the learning objectives and, in turn, the ultimate goals.

Another key insight I’ve gained while researching program outcomes is the revised model of Bloom’s Taxonomy, which is illustrated beautifully online by Iowa State University’s Center for Excellence in Learning and Teaching (2017).  I really like how the cognitive processes (remembering, understanding, applying, analyzing, evaluating, and creating) are run through the various knowledge dimensions (factual, conceptual, procedural, and metacognitive).  I can see how this model can help provide a roadmap for program outcomes as well as learning objectives for specific courses.  For example, perhaps a first-year, first-semester course would focus on remembering and understanding within the factual and conceptual dimensions, while a final-year, final-semester course would focus on evaluating and creating in the procedural and metacognitive dimensions.  This model provides a great foundation for the framework of the program outcomes and the successive progression of student knowledge, skills and abilities.
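To make that progression concrete for myself, I sketched the grid in a few lines of Python.  The course names and their placements on the grid below are hypothetical examples of the idea, not our actual outlines:

```python
# Revised Bloom's Taxonomy: cognitive processes crossed with knowledge
# dimensions, ordered from least to most advanced.
PROCESSES = ["remembering", "understanding", "applying",
             "analyzing", "evaluating", "creating"]
DIMENSIONS = ["factual", "conceptual", "procedural", "metacognitive"]

# Hypothetical course targets: each course aims at a region of the grid.
course_targets = {
    "OHS 101 (year 1, semester 1)": (["remembering", "understanding"],
                                     ["factual", "conceptual"]),
    "OHS 402 (year 2, semester 4)": (["evaluating", "creating"],
                                     ["procedural", "metacognitive"]),
}

def progression_levels(targets):
    """For each course, find the most advanced process and dimension it
    targets, so later courses can be checked against earlier ones."""
    levels = []
    for course, (procs, dims) in targets.items():
        levels.append((course,
                       max(PROCESSES.index(p) for p in procs),
                       max(DIMENSIONS.index(d) for d in dims)))
    return levels

for course, p_level, d_level in progression_levels(course_targets):
    print(course, "-> process level", p_level, ", dimension level", d_level)
```

Seeing the courses as coordinates on the grid makes it easy to spot when a later-semester course fails to sit higher on either axis than the courses before it.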


This investigation has given me a much sharper picture of what needs to happen with our diploma program.

Firstly, program outcomes have to be developed.  I’ve spent some time trying to develop a first draft of some of my ideas.  Graduates will be able to:

  1. Contribute to the maintenance of a safe work environment by managing OHS administrative processes;
  2. Use a range of OHS tools and processes to implement OHS programs and integrate compliance with regulations in complex and non-routine environments;
  3. Conduct training to effectively transfer knowledge and skills to others;
  4. Work under defined responsibilities with general direction in changing contexts within broad but established parameters and determine when issues should be escalated to a higher level;
  5. Influence operational, supervisory and middle management staff and consultative groups across a designated area;
  6. Evaluate the wider implications of OHS strategies and activities for other functions and areas of the organization;
  7. Apply cognitive, communication and technical skills to identify, analyze, synthesize and act on information from a range of workplace sources to solve unpredictable problems in known environments.

In my view, these are broad outcomes (a salad) that need to be successively developed in students through the various learning objectives (the individual, uncut vegetables) that they will eventually master by the end of the program.  Every course that we develop, with its own specific learning objectives, has to be driven by these program outcomes as the end goal.  If learning objectives and the progression of courses and content are focused on developing students towards these goals, then instructional strategies as well as assessment and evaluation will become much clearer throughout the program.

At our next program review meeting, I am going to present these program outcomes with a plan to develop courses and their specific learning objectives based on these broad goals.  As Bowen (2012) states, “if all faculty expect students to be prepared for, and actively engaged, in class activities there will no longer be just one teacher who is trying to teach higher-order thinking and suffering for it in course evaluations” (p. 264).  I never thought that a course called “Media Enhanced Learning” would eventually end with me going back to the foundations of andragogy.  But here I am.  And there I go.


Barrette, C. (n.d.).  Course vs. Program Learning Outcomes: Analogies and Examples.  Wayne State University.  PowerPoint presentation retrieved from:

Bowen, J. A. (2012).  Teaching Naked: How Moving Technology out of Your College Classroom Will Improve Student Learning.  San Francisco: Jossey-Bass.

Iowa State University, Center for Excellence in Learning and Teaching. (2017).  Revised Bloom’s Taxonomy.  Retrieved from:


Contract Grading


This is a written reflection about contract grading.  The idea was touched on by Bowen (2012) when he outlined a course designed using the nomenclature of a multiplayer game (p. 67).  Essentially, students are awarded experience points for fighting monsters (quizzes, exams), completing quests (presentations, research), and crafting (analysis papers, concept documents).


This idea is really fresh in my mind as I have recently completed a course in which I implemented my first version of contract grading.  I developed thirteen assignment options that ranged from unit tests, to lab assignments, to interviewing an expert, to creating a digital project, and even a final exam.  Each assignment was worth a different point value depending on the time commitment, effort required, and perceived difficulty.  Students had to set an objective for themselves and chart their paths through the course material.  Effectively, they chose their own adventures through the assignment options.  

The students chose a wide range of options involving vastly different paths.  Another point that Bowen (2012) touched on was evident in my class: the tendency for females to be more collaborative and males more competitive.  While this was certainly a small sample size, it was interesting to note that the four females in my class decided to work together to navigate the course material, while the five males each set off on their own individual paths to see how much they could improve their GPAs.

Overall, I was pleased with the results of this class.  I could sense a higher degree of motivation in my students as a result of them being in complete control of their learning and their evaluations.  Every student had to summarize their learning from each assignment and share it with the rest of the class, which helped create a strong sense of community.  I liked how students used prior knowledge and experience to take advantage of their strengths.  For instance, one student who is a videography enthusiast decided to interview an expert and sought permission to record the interview, which he then edited into a digital project that he posted online for his classmates.  End of semester feedback indicated that students enjoyed this format and that they would like other courses to be set up in a similar fashion.  It was definitely positive.  

But there were also some issues that need to be addressed in the future.  One is that student marks were inflated compared to a typical grading structure.  Students with average GPAs finished with very high grades in this class, a result of the volume of evaluations they chose.  They did accomplish much more than would normally have been expected of them in this course.  So while I thought it was fair, there were some questions from my administrators about why the marks were so high.  Also, I think the point values need to be revised, as some of the assignments seemed over- or under-valued.


Further investigation into the theory and application of contract grading in higher education has resulted in key new insights into this strategy.  

Hendricks (2012) notes that this approach did in fact start as a contract between the student and teacher, promising a certain grade if the student lives up to their end of the bargain.  This idea seems to have evolved into various incarnations.  The version I used was noted by Hendricks and is used by Gerald Herman, a professor of history at Northeastern University, who “lets students choose how many assignments to complete out of a range of assignments.  Each one is worth a certain amount of points, and to get an ‘A’ students have to get a total that could be reached by doing very well on a few assignments, or less well on more assignments” (Hendricks, 2012, para. 6).

Dr. Andy Johnson (2010), in a YouTube video titled “assessment contract grading,” discusses the idea of awarding a certain letter grade based on meeting a certain number of objectives.  The objectives are things like “attends and does the work for any 11 sessions with his or her writing group, participates in two individual conferences with the instructor, and contributes to the writing group and group journal.”  Students have to meet at least 5 of these 7 objectives to earn an “A”.

Another approach has been proposed by Danielewicz and Elbow (2009), and was covered by Billie Hara (2010).  They indicate that students can earn a “B” on the basis of completing a list of tasks, but if they want to earn an “A”, then the quality of their work comes into play.  They state, “contract grading focuses wholeheartedly on the processes whereas conventional grading focuses much more on products, outcomes, or results” (p. 260).

Dr. Kathy Sanford (2017), in a YouTube video titled “Contract grading by Dr Kathy Sanford,” discusses how contract grading allows a different relationship to form between the student and teacher, in which the foundation of authoritative evaluation is replaced with trust and understanding.  I thought that was the case in my course this past semester, as I noticed my interactions with students were much more focused on bouncing ideas off each other and guiding them, rather than being demanding of them.  She also discusses the importance of the learning community and how contract grading aligns with that vision.  I would have to agree, as I noticed that in my course as well.  Rather than believing they were ranked and ordered in accordance with the typical approach to grading, I thought students felt as though they were equal contributors to each other’s learning.


These key new insights have led me to incorporate several revisions into the contract grading evaluation plan I use.  Firstly, I will make certain evaluations mandatory, such as the final exam.  There are certain constraints within the course outline that I simply won’t be able to stray from.  Also, while I like giving students control over exploring course content in whichever way they choose, a summative evaluation of their learning through that process is valid evidence of whether or not the approach worked for them.  In this particular course, the final exam was to be worth 35% of the final grade, which still would have given students the opportunity to determine the remaining 65% of their evaluations on their own.

I will continue to use the points system, whereby students can earn a good grade either by doing a few assignments very well or by doing a greater number of assignments at an average quality level.  I really think the points are a huge motivating factor, and their relationship to gamification is a big part of that.  I agree with Warner (2016) in that I’m happy to trade slightly higher grades for more student work.  But I am interested in putting some thresholds on this approach, and I’m now thinking about incorporating more quality into the final grade.  For instance, if students don’t produce anything above average quality, they can’t earn more than an average grade regardless of the volume of work.  A score of 90 or above would be reserved for students who have completed one or more exemplary works.  This might help avoid everyone in the course earning 100%.
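As a sanity check on these revisions, I sketched how the volume-plus-quality rules could combine.  All of the point totals, percentages, and thresholds below are invented for illustration, not final values:

```python
# Sketch of the revised contract grading rules: points reward volume of
# work, but the quality of a student's best work caps the final grade.
# Every value here is hypothetical.

GRADE_FLOORS = [(90, "A+"), (80, "A"), (70, "B"), (60, "C"), (0, "D")]

def capped_grade(points_earned, best_quality, exemplar_count,
                 points_possible=200):
    """best_quality is 'average', 'good', or 'exemplary'; exemplar_count
    is the number of submissions judged exemplary."""
    pct = 100.0 * points_earned / points_possible
    # A volume of merely average work can't earn more than an average grade.
    if best_quality == "average":
        pct = min(pct, 75.0)
    # Scores of 90 or above are reserved for students with exemplary work.
    if exemplar_count < 1:
        pct = min(pct, 89.0)
    for floor, letter in GRADE_FLOORS:
        if pct >= floor:
            return pct, letter

# A large volume of average-quality work tops out at a "B"...
print(capped_grade(190, "average", 0))      # (75.0, 'B')
# ...while the same volume including exemplary work reaches the top band.
print(capped_grade(190, "exemplary", 2))    # (95.0, 'A+')
```

Writing the rules out this way makes the trade explicit: points still drive the grade, but the quality caps keep volume alone from producing a perfect score.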

In an effort to focus this approach more on the process than on products, I am going to incorporate the frequency of contributions to the learning community into this grading system.  For instance, if a student doesn’t complete any assignments until the very end of the semester, then hands in 5 or 6 at once, there should be a penalty, because that contribution is less valuable to the learning community.  One way around this will be a better “contract” at the beginning of the term.  Students will set their objective (i.e., the grade they want), develop a plan to reach that goal (choosing the assignments they want to complete), and indicate due dates for each of those assignments.  This will help hold students accountable for making frequent and consistent contributions to the learning of their classmates.

Overall, there is a lot of merit in the contract grading approach to evaluation.  I witnessed it firsthand in my initial attempt at this strategy.  While there is certainly room for improvement in how I implemented it the first time, with a few tweaks I think I can better align the concepts of this theory with the constraints of my institution.  Ultimately, this is in line with the ideas that Bowen (2012) discusses regarding games, customization, and learning, and I’m confident this approach can increase student motivation and help develop a strong learning community.


Bowen, J. A. (2012).  Teaching Naked: How Moving Technology out of Your College Classroom Will Improve Student Learning.  San Francisco: Jossey-Bass.

Danielewicz, J., & Elbow, P. (2009).  A Unilateral Grading Contract to Improve Learning and Teaching.  College Composition and Communication, 61(2), 244-268.

Hara, B. (2010).  Using Grading Contracts.  The Chronicle of Higher Education.  Retrieved from

Hendricks, C. (2012).  Contract Grading, Part 1.  You’re the Teacher.  Retrieved from

Johnson, A. (2010).  assessment contract grading [YouTube video, Dr. Andy Johnson channel].  Retrieved from

Sanford, K. (2017).  Contract grading by Dr Kathy Sanford [YouTube video, Foliozone research channel].  Retrieved from

Warner, J. (2016).  I Have Seen the Glories of the Grading Contract.  Inside Higher Ed.  Retrieved from

Learning Communities


This is a reflection on the concept of learning communities, as presented by Barkley (2010) in Student Engagement Techniques.  One of the conditions that promotes synergy between motivation and active learning is creating a sense of classroom community.


This concept stood out to me for a few reasons.  Firstly, being enrolled in the online version of PIDP 3250, I believe I am currently part of a learning community.  It is unlike any other learning community I’ve participated in, primarily because it feels more like a community than just about any other classroom or online learning experience I’ve had.  And that’s the feeling I get after just three weeks in the course.  So I’d really like to step outside the course for a moment, look down on it, and investigate what makes it such an effective community.

Secondly, I just really like the idea of my classes having a strong sense of community.  To me that means students are working together, have a mutual respect and understanding for each other, contribute to their own and others’ learning in a meaningful way, and everyone feels like they belong and have a sense of purpose for being actively engaged.

Lastly, I went out to brunch with a teacher friend a few weeks ago who was telling me how she’s found great success in implementing technology in her class to establish a community.  This piqued my interest and I’ve been meaning to investigate that further.  

So it’s all fresh in my mind as these ideas converge from different angles into a single reflection on relevant course content.  I’m motivated to look into this concept, and I have a sense that my time spent doing so will have value.


Creating a sense of classroom community promotes synergy between motivation and active learning, which strengthens student engagement.  The questions I want to explore are: What are classroom communities, and how do we create a sense of classroom community?

Barkley (2010) cites Cross’s (1998) definition of learning communities as “groups of people engaged in intellectual interaction for the purpose of learning”.  Watkins (2004) adds that the community not only learns together, but learns “about its collective process of learning”.  He adds that the “focus is on human processes for building social and learning relations” (p. 1) and that the community has certain hallmarks: members decide and review; belongingness develops; cohesion amongst members emerges; and diversity is embraced.  Watkins also identifies a number of processes likely to be present in a community: active engagement with the community goal; bridge-building to other communities; collaboration to create joint projects; and dialogue to engage and progress.  Ultimately, the focus of classroom communities, while demonstrating these hallmarks and processes, is on learning, which usually occurs through enquiry and the creation of new knowledge.

Advice on the development of classroom communities seems deeply rooted in online learning environments.  Cooper (2016) provides some advice to instructors on how to build an online learning community.  His six tips for instructors are: be present at the course site; get your students involved; set expectations from the beginning; interact with students; invite questions, discussions and responses; and mix up the way students learn.  Pappas (2016) has additional suggestions, such as cultivating a personal connection and appointing online learning community leaders.

Browning (2016) provides advice to instructors about classroom learning communities facilitated through online learning management systems, suggesting that transparency, modeling behaviour, and celebrating differences are keys to success.  She notes that discussion forums and blog assignments “provide opportunities for students to communicate digitally with classmates that they might not speak to in person”.  She discusses an example of how she highlights student blogs and embraces different perspectives and honesty above all else, which promoted a more open exchange of meaningful comments.

Returning to Watkins (2004), his extensive research into the topic led him to summarize classrooms as communities with the following descriptions: “students are crew, not passengers… cooperative learning activities build supportive relationships that increase belonging and motivation… students take an active role in classroom governance.”  He outlines classroom practice as a number of elements – tasks, resources, social structure, and roles – all operating around a central classroom goal.  His version of a classroom community is observed in the following points:

  • Students operate together to improve knowledge
  • Students help each other learn through dialogue
  • Learning goals emerge and develop during enquiry
  • Students create products for each other and for others
  • Students access resources outside the class community
  • Students review how best the community supports learning
  • Students show understanding of how group processes promote their learning
  • The classroom social structures promote interdependence
  • Students display communal responsibility including in the governance of the classroom
  • Assessment tasks are community products which demonstrate increased complexity and a rich web of ideas

All of the above advice seems to be enshrined in the online version of PIDP 3250.  And it all seems like what I’m after.  While I understand it may take quite a bit of practice, it certainly provides a vision, with some aims and objectives of how to start building better learning communities in my courses.


While I’m most interested in establishing a community inside my classroom for the several hours per week I’m together with my students, I understand that having a central online hub can extend the community outside of the classroom.  My college uses Desire2Learn (D2L), which, from the outside looking in, is great, but students feel like it’s just another place they have to log in.  I’ve tried other approaches to building an online community by getting involved in social media platforms like Instagram, Twitter and, yes, even Snapchat at the request of my students.  The result, though, is that the community is starting to be spread too thin; there is no central online hub outside of the classroom.  Students access course material on D2L, post pictures on Instagram, and share stories on Snapchat.  I’ve been exploring ways to bring this all under one roof.

Perhaps exploring D2L further might help.  I notice it has a blog option, and I can certainly incorporate the discussion forum with more vigor.  The idea of having students facilitate their own discussion forum for a set time on an assigned topic is fantastic and meets much of the previous advice about learning communities.  D2L also allows students to reflect and maintain a portfolio, but these are options I just haven’t investigated fully; I use it primarily as a repository for course content.  I’m going to meet with our instructional designers and seek support from our Teaching and Learning Centre with the specific goal of incorporating D2L as a blended approach to my classroom, maximizing the opportunity to participate in the community.

A few weeks ago, a friend told me how she has observed positive outcomes in her students after establishing a Google Community for her courses.  I joined a few Google communities over the past few weeks, and they have been engaging.  They are much more aesthetically pleasing than D2L, and even the fact that it’s called a “community” goes a long way for me.  There is potential for different students to moderate the community for given times, on specific topics, or in whatever way they choose.  Ultimately, it looks like a great option to bring course content under the same roof as social media by combining the best of both worlds.  It doesn’t seem like too great a leap to have my students connect to an established community with an existing Google account.

In summary, I’m going to keep trying to establish the foundations of an effective learning community during class time.  I’m also going to explore the idea of creating a central online hub for my learning communities and give students the power to moderate and regulate their own learning in that format.  This might involve more blog projects, discussion forums, and digital projects, but, more importantly, sharing all of that knowledge with each other, under one roof.


Barkley, E. F. (2010).  Student Engagement Techniques: A Handbook for College Faculty.  San Francisco: Jossey-Bass.

Browning, C. (2016).  Building an Online Learning Community That Fosters Relationships. Retrieved from

Cooper, S. (2016).  How to Build a Thriving Online Learning Community. Retrieved from

Pappas, C. (2016). 8 Tips to Build an Online Learning Community. Retrieved from

Watkins, C. (2004).  Classrooms as learning communities.  NSIN Research Matters, No. 24 (Autumn 2004), Institute of Education, University of London, pp. 1-8.


Student Motivation



This is a reflection on the relationship between motivation and student engagement in the college classroom, as presented by Barkley (2010) in Student Engagement Techniques.


Student engagement is a term used often at my place of work; I probably hear it at least monthly at staff meetings, in the lunchroom, or in passing conversations.  I mention that because, like much of what I’ve learned so far in my journey through the Provincial Instructor Diploma Program, it boggles my mind that nobody, in my 7 years as a college instructor, has explained some fundamental concepts of adult education to me.  Once again, I was blown away by the simple Venn diagram that illustrates student engagement at the intersection of motivation and active learning.  So simple, yet so profound.  All three of those terms are familiar to me from my profession, but the relationship between them has never been so clearly demonstrated.  This basic diagram gave me pause and created a vision for my future in this profession.  It piqued my interest in the textbook, in this course, and in putting these concepts into practice as soon as possible.

I was quickly brought back down to earth as I continued reading about student motivation.  Terms like self-efficacy, attribution theory, self-worth models, and flow helped me realize I was a long way from discovering the holy grail of teaching.  There is obviously a lot of work ahead of me to get a good grasp of the concept of student engagement.  But what better place to start than further investigating the idea of motivation as the product of expectancy and value.


The key insight gained from reading the first two chapters of Student Engagement Techniques is the expectancy-value model of motivation.  Effectively, students’ motivation is influenced by what they believe they can accomplish and what they think is important.
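In its simplest textbook form, this model is written as a product rather than a sum; the notation below is a common rendering of the idea, not Barkley’s own:

```latex
% Expectancy-value model of motivation, simplest multiplicative form:
M = E \times V
% where M is motivation, E is expectancy ("can I do this?"), and
% V is value ("do I care?"). Because the relationship is a product,
% a zero in either factor drives motivation to zero: a highly valued
% task a student believes is impossible produces no motivation, and
% neither does an easy task with no perceived value.
```

The multiplicative form captures why both factors have to be addressed at once; raising one cannot compensate for the complete absence of the other.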

To me, the idea of expectancy (what a student believes he or she can accomplish) is best described by Bandura’s self-efficacy theory.  There is a great deal of research on the relationship between self-efficacy and motivation.  Chang et al. (2014) investigated the effects of online college students’ Internet self-efficacy on learning motivation and performance.  They concluded that students with high Internet self-efficacy outperformed those with low Internet self-efficacy, and encouraged educators to identify the psychological characteristics of online learners in order to provide suitable support for their learning.  The study integrated Keller’s model of attention, relevance, confidence, and satisfaction (ARCS), which has been found to be beneficial to the improvement of learner motivation.  Ultimately, the study found that Internet self-efficacy helped students transform motivation into learning action, which improved learning performance.

Hsieh (2014) investigated the relationship between different types of learning motivation, engagement behaviours, and learning outcomes in undergraduate students in Taiwan.  One outcome of this study was that student background characteristics (such as gender, socioeconomic status, and major) and learning motivation (intrinsic, extrinsic, task value, or self-efficacy) were more important predictors of learning outcomes than student engagement behaviours (such as active participation, interaction with instructors, and cognitive effort).  To me, this means that regardless of how well designed an instructional strategy or participatory learning activity is, if it is not founded on the proper degree of motivation, it may very well fall flat.

Barkley (2010) uses Csikszentmihalyi’s concept of “flow” to explain value.  This concept describes a state of deep intrinsic motivation where action and awareness merge.  She goes on to provide Wlodkowski’s suggestions about how to help students achieve a sense of flow: “(1) goals are clear and compatible, allowing learners to concentrate even when the task is difficult; (2) feedback is immediate, continuous, and relevant as the activity unfolds so that the students are clear about how well they are doing; and (3) the challenge balances skills or knowledge with stretching existing capacities” (Barkley, 2010, p. 14).  

As I look back on life and try to remember times when I experienced flow, my memory keeps recalling video games.  I distinctly remember explaining to my friends why I enjoyed engaging with certain games: it felt like an escape, and I wasn’t thinking about anything else happening in the world while I was playing.  I can clearly make a connection between those times of complete immersion and Wlodkowski’s suggestions for achieving flow.  Looking back, those games had clear and compatible goals, feedback was immediate and continuous, and the challenge was appropriately difficult.  I suppose I was properly motivated through expectancy and value.  Wheeler (2016) seems to agree with this idea; one of his suggestions for achieving flow in students is games and gamification.  He also suggests role play, simulation and problem solving as other forms of immersive learning.


There are a number of key takeaways that I have decided to move forward with as a result of my investigation into student motivation and in particular the expectancy-value model.

Firstly, I have to do a better job of building self-efficacy and flow into my lessons.  While I think I have made great strides this academic year in providing more active learning opportunities for my students, I now fear that they may not be founded on proper motivation.  This investigation has elevated the importance of the bridge-in for me.  I now see it as something much more than just grabbing the students’ attention.  It is an enormous opportunity to base the lesson on expectancy and value and keep students motivated and engaged throughout the participatory learning activities.  Taking that a step further, I should build many more bridge-ins into a lesson.  As an example, I am currently teaching a four-hour class where each lesson begins with a bridge-in, continues with multiple participatory learning activities, and concludes with a single summary.  It may be much more beneficial to plan these lessons to incorporate motivation-enhancing bridge-ins at the beginning of each participatory learning activity, each followed by a summary of that activity, in hopes of helping the students achieve a sense of satisfaction.  This can also be done after breaks and during the introduction of a course assignment.  Ultimately, I should maximize the opportunities to enhance student motivation with every student engagement technique that I decide to use.  I am going to further investigate the ARCS model to achieve this.

Another takeaway from this reflection is for me to do a more effective job of learning about my students.  I have the benefit of teaching in a fairly close-knit program that provides ample opportunity to establish and maintain good relationships with my students.  If I can learn about their self-efficacy, their learning motivations, and what they value and expect, it may help me get more in tune with how I can support their learning experience.  I am going to research surveys that could help me obtain data about these characteristics of my students.

Lastly, I am going to focus more on planning immersive learning experiences where appropriate.  Gamification is a primary example.  One success earlier this semester was a Kahoot! quiz that followed a video we watched.  Students competed for a small prize during the quiz, and I felt it really increased engagement with the video.  The students were certainly immersed in the competition.  While this happened almost by accident, I can now see that the success may have been based on the expectancy-value model.  Another success was a role play scenario that I established.  Where I would normally have lectured to my students about joint health and safety committees, this semester we established our own committee and each student was given a role to play at our mock meeting.  The students were immersed for sixty straight minutes, contributing to the meeting and fulfilling their roles.  Once again, it happened partly by accident, but I now see that this fits squarely within the expectancy-value model.  I will continue to investigate opportunities to incorporate gamification and role play wherever I can.


Barkley, E.F. (2010).  Student Engagement Techniques: A Handbook for College Faculty.  San Francisco: Jossey-Bass.

Chang, C-S. (2014).  Effects of online college students’ Internet self-efficacy on learning motivation and performance.  Innovations in Education and Teaching International, 51(4), 366-377.

Hsieh, T-L. (2014).  Motivation matters? The relationship among different types of learning motivation, engagement behaviors and learning outcomes of undergraduate students in Taiwan.  Higher Education, 68, 417-433.

Wheeler, S. (2016).  The Flow Theory in the Classroom: A Primer.  Retrieved from

Reflective Practice


This is a written reflection on the quote, “Simply having experiences does not imply that they are reflected on, understood or analysed critically.  Individual experiences can be distorted, self-fulfilling, unexamined and constraining,” taken from Brookfield’s text The Skillful Teacher.


This reminds me of the famous John Dewey quote, “we do not learn from experience… we learn from reflecting on experience.” And while I have already reflected on the importance of student reflection on learning experiences in previous PIDP assignments, this time around the quote resonates with me because of the importance of reflecting on my own teaching practice. It is certainly something I haven’t engaged in formally by using any kind of record or model, but as I increasingly embrace reflective practice, it is something that I should make a regular part of my profession, especially now that I have quite a bit of experience with the focused conversation model.

Whenever I incorporate new instructional strategies, assignments, evaluations, or assessment techniques, it is critical to reflect on how they went for the students and what further improvements could be made in the future. There are things I did as an instructor this past semester that I know I won’t do again. One example is that my heightened sense of the importance of collaboration led me to deliver an assignment worth 10% of the final grade that had to be completed with a learning partner. This was far too high stakes and far too intense a collaboration at this stage of my students’ development.  I will take it much slower with collaboration in the future, focusing more on instructional strategies and assessment techniques.

While technically this constitutes a reflection, it happened more by happenstance than with intention, primarily as a result of overwhelmingly negative student feedback about the assignment.  I should focus on making reflection a much more regular part of my profession, with intention and by using the focused conversation model.


One of the terms that keeps revealing itself as I research reflective practice is “critical reflection.”  According to Larrivee (2010), the term critical reflection “merges critical inquiry, the conscious consideration of the ethical implications and consequences of teaching practice, with self-reflection, deep examination of personal beliefs, and assumptions about human potential and learning.”  She warns that failing to practice critical reflection leaves instructors “trapped in unexamined judgements, interpretations, assumptions, and expectations.”  This can obviously be detrimental to an instructor and their students.

Ghaye (2011) discusses reflective practice in the context of strength-based thinking, which “explicitly emphasizes reflecting on strengths so as to identify them, play to them and develop new ones.”  He advocates an approach to critical reflection that strikes a balance between strengths and weaknesses.  He provides a number of guiding questions to begin the journey of strength-based critical reflection, such as “What was your best day at work in the past three months? What were you doing? Why was it the ‘best day’?” and “What was your worst day at work in the past three months? What was going on?”  This is an interesting approach to reflective practice that I believe captures an important aspect of teaching and learning: identifying and overcoming weaknesses while focusing on and further developing strengths.

With respect to how to critically reflect, in Teaching in the Lifelong Learning Sector, Peter Scales (2012) recommends keeping a professional development journal, which he describes as “a written record of your experiences of, and feelings about, planning, preparing and delivering teaching and learning.  It will contain general accounts of learning sessions but, more importantly, will identify critical incidents which can be the basis for learning and continuing professional development.”


It seems clear to me that keeping a reflective journal as part of my teaching practice is an essential component of my continual development as a competent instructor. I will begin keeping a reflective journal immediately.  This is a great opportunity to do so as a new semester is just beginning.  I will update my professional development plan with the goal of maintaining a reflective journal.  

Having just started to use Microsoft OneNote for all my note-taking needs, I have a good opportunity to start a new notebook in this application specifically for critical reflections related to my teaching practice.  I will identify critical incidents this coming semester and reflect on them using the focused conversation model.  Another model that may be effective in certain circumstances is the “What? So what? Now what?” model outlined by Scales (2012).

I will use my critical reflections to help plan instructional strategies and evaluations and to manage my teaching practice in the future.  The ultimate goal of this endeavour is to ensure there is a continuous improvement cycle in operation, for the benefit of my students.


Brookfield, S.D. (2015).  The Skillful Teacher: On Technique, Trust, and Responsiveness in the Classroom.  San Francisco: Jossey-Bass.

Ghaye, T. (2011).  Teaching and Learning Through Reflective Practice: A Practical Guide for Positive Action.  New York: Routledge.

Larrivee, B. (2010).  Transforming teaching practice: Becoming the critically reflective teacher.  Reflective Practice: International and Multidisciplinary Perspectives, 1(3), 293-307.

Scales, P. (2012).  Teaching in the Lifelong Learning Sector.  London: Open University Press.

Student Satisfaction


This is a written reflection on the quote, “I find myself repeatedly frustrated by not achieving an unblemished record of expressed student satisfaction for every week of the course,” taken from Brookfield’s text The Skillful Teacher.


I realize that pleasing all students at all times is an impossible task.  It kind of reminds me of something I teach about: indoor air quality standards.  Good indoor air should satisfy 80% of the occupants.  Even good quality indoor air will leave 20% of occupants unsatisfied for one reason or another.  It’s kind of like teaching.  Even on your best day, when everything goes according to plan and it feels like all of the students participated and learned something, some of the students may have been left unsatisfied.

It’s strange, then, that being perfect all the time and pleasing every student in every class has somehow become an expectation.  It really is foolish to think that there won’t be some level of dissatisfaction among students at any given time; there is a seemingly infinite number of variables that can cause it.  So perhaps making an effort to minimize dissatisfaction is a more realistic goal than eliminating it entirely.

Interestingly, this reflection has been written on the heels of last semester’s student survey results.  And while the vast majority of my students seemed to have a positive experience, one student clearly had a horrible experience.  While this wasn’t a surprise to me, based on how the semester went with this student, I’ve thought long and hard about how things could have been different with that student had I addressed her resistance to learning early in the semester and been more in tune with the emotions involved in the teaching and learning environment.


I decided to investigate how to increase student satisfaction.  Hopefully I can determine some ways to minimize student dissatisfaction (and my own dissatisfaction when failing to satisfy all of my students all of the time).  In the December 2010 edition of Reflections, published by Queen’s University Belfast, Phil Race, Emeritus Professor at Leeds Metropolitan University, describes a meeting where 50 university professors gathered to discuss student satisfaction.  The meeting was called in response to a declining trend in student satisfaction in higher education as determined by the National Student Survey in the UK.  While the group of professors pointed out a number of flaws with the survey itself, they went on to identify what “bugs” students and what factors instructors can focus on to transform the student experience in higher education.

While there was a long list of items that “bug” students, the following stood out to me: slow or no feedback; lack of communication and connection between lecturer and student; and not being treated with respect.  These items deal with the ‘softer’ side of teaching, the art of teaching much more than the science of learning, and it is a personal focus of mine to improve in this area before the end of this academic year.  There is much room for improvement with respect to responding to student resistance and being sympathetic to the emotions involved in the teaching and learning environment.  Hopefully I can prevent another student from having a horrible experience in my class.


Going into the new semester, there are a number of new strategies that I will employ based on the PIDP 3260 – Professional Practice course – and in particular my reading of The Skillful Teacher.

First and most importantly, I believe I will be ready to expect student resistance to learning and be much more prepared to deal with it than in previous semesters, when I simply jumped to the conclusion that students were bad or lazy.  I would like to address any resistance to learning on a personal level by meeting with students outside of class as soon as possible to determine what is causing the resistance.  This will allow me to address the situation immediately rather than let it fester, which is what got me into trouble last semester.

Secondly, I will use many more feedback tools at various points in the semester to establish better communication and connection with my students.  I plan to gather and respond to feedback at the end of every lesson.  Further, I would like to do a more formal assessment of their satisfaction at the quarter point of the semester.  This will lead into a small group instructional feedback session at midterm and conclude with the same quarter-term feedback tool at the end of the semester.  Overall, I think this will help the students feel that I respect their collective voice by being willing to listen and respond to it.

Race, P. (2010).  Increasing Students’ Satisfaction.  Reflections, December, Queen’s University Belfast.  Retrieved from

Brookfield, S.D. (2015).  The Skillful Teacher: On Technique, Trust, and Responsiveness in the Classroom.  San Francisco: Jossey-Bass.

Student-Generated Test Questions


This is a reflection on a video previously prepared for PIDP 3230, which was published to YouTube on April 10, 2016, by Emma Leigha Munro, titled “Student Generated Test Questions” on the channel called “SUBSCRIBE Thanks :)”.


I really enjoyed this video, created using PowToon, an online tool for developing animated presentations.  The video really caught my eye and had me more engaged and interested than many of the videos I watched that were created with Screencast-O-Matic.  I found this video because I was initially interested in student-generated test questions, but the link to an example video for this classroom assessment technique (“CAT”) on the PIDP 3230 course page redirected to a page where the video was not available.  As I was committed to reflecting on this topic, I searched the internet in hopes of finding another student submission, and sure enough, the first video result was Emma Leigha Munro’s YouTube video.  This helped solidify two important ideas for my upcoming video assignment: make the video as engaging as possible with appropriate video, graphics, animations, and audio; and make the video available on YouTube, and in turn the public search record, so that it is more likely to be found and hopefully teach someone in the future.

The reason that this particular CAT intrigues me is because of one of the major pros: “when students suggest test questions and try to predict the actual test questions, they are – in effect – beginning to prepare, in useful ways, for the upcoming test” (Angelo & Cross, 1993, p. 243).  This is a logical progression from one of the main principles of my first reflection, which investigated the notion proposed by Fenwick and Parson (2009) that evaluation can be the most vital, permanent part of the learning experience (p. 9).  Therefore, a CAT that helps amplify the learning experience of an upcoming evaluation seems like an incredibly powerful tool.  


The significance of this CAT for me is the potential for immediate application in one of the courses I am currently teaching.  This is, of course, one of the major motivating factors for adult learning.  

In my toxicology course this semester, I have already committed to employing collaborative testing.  For the first test of the semester, a two-stage model proposed by the Carl Wieman Science Education Initiative (2014) was used.  My thought is that, using some class time during the two to three weeks leading up to the next test, students can collaborate to prepare student-generated test questions, which will make up the collaborative portion of that test.  This still incorporates a collaborative portion of the test, though during the evaluation students will complete this portion individually; the collaboration will happen as part of an informal assessment technique during class time leading up to the formal evaluation.

I have tried using student-generated test questions before, but after reading about this CAT as proposed by Angelo and Cross (1993), there are a number of things that I didn’t quite get right and will need to improve for this upcoming attempt.  The first and most noticeable shortcoming was timing.  Angelo and Cross (1993) recommend facilitating this CAT at least two or three weeks prior to a formal evaluation (pp. 240-241), whereas previously I had given students only one week.  Secondly, I didn’t really use any kind of procedure for this CAT when I tried it previously; it was carried out rather sloppily and perhaps without enough purpose.  Therefore, the step-by-step procedure proposed by Angelo and Cross (1993) is incredibly helpful.  This is especially true for the third step in the procedure, which requires a detailed explanation to the students of the requirements and benefits of this CAT (p. 242).  Lastly, one of the caveats proposed by Angelo and Cross (1993) is, “do not promise categorically to include them on the test” (p. 243), which is unfortunately what I did previously.

Therefore, with some slight adjustments, improvements, and adaptations, I think this can be an incredibly useful tool that I am ready, willing and able to incorporate immediately into one of my courses this semester.  


My six students are currently working through absorption, a major topic in toxicology.  From the course outline, there are four learning outcomes for this topic: explain the process of absorption; outline the methods of transport across cell membranes; explain the factors that affect the rate of absorption; and describe how toxins are absorbed by the skin, the gastrointestinal tract, and the respiratory tract.

My idea for carrying out this CAT is to focus on these learning outcomes.  I would like students to come up with one question of each type for each learning outcome.  For instance, each student will submit one multiple choice, one true or false, and one short answer question for each learning outcome.  Therefore, each student will submit a total of twelve questions and sample answers for review and feedback.

A handout will be provided to each student, explaining the process of this CAT and the benefits of participating in it.  For this particular activity, students will be assigned a learning partner to collaborate with in the development of the questions.  In this particular case, a sample of well-developed and relevant questions will be used to form the collaborative portion of the upcoming test, which is worth 15% of the test grade.

My idea is to establish four learning stations in class – one for each of the above-noted learning outcomes.  At each learning station, a handout that provides guidance on developing each type of question will be provided for reference.  Each set of learning partners will be assigned to a particular learning station.  Given approximately twenty minutes at each learning station, the learning partners will prepare one multiple choice question, one true or false question, and one short answer question with sample answers.  The learning partners will rotate through the learning stations until they have generated questions for all of the learning outcomes.  I will need the two-hour block of class to complete this activity.    

 In the next class, I will provide feedback on questions that were not developed in accordance with the guidelines and show examples of some of the questions that were well developed.  Students will be given an opportunity to revise their questions based on the feedback given.  

Finally, I will use an appropriate sample of student-generated test questions to make up the collaborative portion of the upcoming test on absorption.  Given that the students collaborated to develop the questions, and had further opportunity to discuss the questions and answers outside of class time, the marks may be slightly inflated.  But considering that this collaborative portion will only account for 15% of the total test grade, and the class policy is that the collaborative portion of the test cannot lower an individual’s grade, this is appropriate in this case.

After all, I am more concerned about using this CAT to amplify the learning experience created by the formal evaluation, which can be the most vital, permanent part of a learning experience.  I feel like if this CAT is well executed then the students have a great opportunity to really learn the material and have it stick in time for the test and beyond.


Angelo, T.A., & Cross, K.P. (1993).  Classroom assessment techniques: A handbook for college teachers (2nd ed.).  San Francisco: Jossey-Bass.

Carl Wieman Science Education Initiative, (2014).  Two-stage exams.  Retrieved from

Fenwick, T.J., & Parsons, J. (2009).  The art of evaluation: A resource for educators and trainers (2nd ed.).  Thompson Educational Publishing.

Group Grading


Working effectively in groups is a 21st Century skill that is becoming increasingly important.  This is my reflection on how we can effectively evaluate group work and projects.  Further, what checks and balances can be put in place to improve the accuracy of group grades?


This choice of topic immediately stood out to me because of its natural, logical progression from previous reflections I worked on in PIDP 3100 – Foundations of Adult Education.  Those reflections led me to put forth a concerted effort to incorporate collaborative work during class time and on some evaluations in my courses this semester.  That initial reflection and research helped me understand the undeniable value of teaching 21st Century skills, especially in technical programs, where students are expected to contribute to the workforce immediately upon graduation.  Now that I have started incorporating some collaborative work into my courses, I have discovered some challenges.  One of them is how to assign groups in a fair and transparent manner, but the other is most certainly evaluating the group work.  Therefore, investigating this issue further is something I am motivated to do and something that I know will have an immediate impact on my practice.


There is a lot of peer-reviewed research on evaluating collaborative work, but one of the most interesting and relevant papers that stood out to me was by Clark and Jasaw (2014).  They introduced the concept of “triangulation” to evaluate multidisciplinary groups of students carrying out action research projects in rural Ghana.  The triangulation approach used evaluations from all three stakeholders, which in this case were the groups themselves, the instructors facilitating the projects, and the community that the work was being done for.  The key insight gained from this method is to use a multi-layered approach to evaluating group work: a form of self- and peer-assessment, instructor assessment, and real-world integrated assessment all contributing to the overall evaluation.  This seems like an effective check and balance to help improve the accuracy of group grades, rather than simply relying on a single source of evaluation, such as the group itself or the instructor.

One of the concepts that really caught my attention in previous reflections on 21st Century learning skills was the idea of evaluating the process that a group uses to arrive at a solution, rather than evaluating the final product itself.  Revisiting this concept is helping me further understand the value and importance of group work.  To me, it is not the final product of the group work that is most important; it is the process the group used to arrive at that product that students should take from these learning experiences.  Piercy (2013) supports this idea, having had groups of business students evaluate an experiential learning activity through collaborative reflection.  His groups of students worked through a real-world scenario.  After the exercise, the groups had to present on their performance and discuss any key issues that they experienced.  Each group was required to produce an assessed report in which they analyzed their performance and related it back to specific learning outcomes.  He posits that this collaborative reflection truly helps solidify the learning experience of working with a team.  Therefore, if collaboration is being used to develop 21st Century skills, perhaps using reflection as a form of self-assessment is an effective approach to evaluating the process and not the final product.

Several challenges of evaluating group work to develop 21st Century skills have been published.  Zulfiqar and Shah (2013) reported that their accounting students did not enjoy working in groups, nor did they enjoy participating in oral presentations, even though they recognized the importance of developing 21st Century skills.  They concluded that as long as there is a robust and transparent system of evaluating students’ performance in place, staff and students prefer group work to be a summatively assessed part of the curriculum.  They also concluded that although staff favoured the use of peer-assessment, students felt uncomfortable performing this activity.  This resonates with me because I have already experienced similar sentiments from students in the few instances of collaborative work that I have implemented in my courses.


In one of my first major attempts at using collaboration to help develop 21st Century skills, I paired students to produce a technical report based on noise measurements that were taken in an engineering workshop during one of our labs.  In the past, this assignment would have been completed individually and the grade would have been based entirely on the instructor evaluating the final product in relation to a rubric.  

This semester, I decided to revise this evaluation to incorporate collaboration by having students work with a learning partner.  I thought this might help weaker students improve while also challenging stronger students by giving them an opportunity to help a peer.  Also, rather than basing the grade solely on the instructor evaluating the final product, I gave students an opportunity to collaboratively self-assess their own technical report against the rubric by submitting a short rationale.  Lastly, I asked students to assess their peer by completing a short form rating how effectively their partner collaborated and how much work they contributed to the final product.  Overall, I thought the assignment was much improved from previous deliveries.  However, in hindsight, the self-assessment was still focused on the final product and not the process, and the peer-assessment seemed ineffective, as all students graded their peers a perfect score (which I suppose is possible, but not very likely to be accurate).

Based on the research reviewed as part of this reflection, I would revise this assignment in the future by eliminating the peer-assessment portion of the grade and, rather than a self-assessment of the final report against a rubric, having each pair of students give an oral presentation on the process they went through to arrive at the final product and their self-assessment of that process.  This form of collaborative reflection and self-assessment appears to be much more effective at capturing the value of working in pairs and solidifying the importance of the process rather than the final product.  This would accomplish several things: it would still offer a multi-layered approach to the evaluation, whereby the final grade comprises student self-assessment and instructor evaluation; it would widen the opportunity to collaborate, because students would have to continue working together beyond the production of the final report; it would enhance the importance of collaboration by having students reflect on the process together; and it would provide further learning opportunities on the process of collaboration by having students share their experiences with the rest of the class.

Overall, there does not yet seem to be an exact science to grading group work.  There are, however, some very effective approaches depending on the learning outcomes of the activity.  A multi-layered approach of self- and peer-assessment, teamed with reflection, seems to be at the forefront of effective group-work grading because it helps provide checks and balances that improve the accuracy of group grades.


Clark, G., & Jasaw, G.S. (2014).  Evaluating team project-work using triangulation: lessons from communities in northern Ghana.  Journal of Geography in Higher Education, 38(4), 511-524.

Piercy, N. (2013).  Evaluating experiential learning in the business context: contributions to group-based and cross-functional working.  Innovations in Education and Teaching International, 50(2), 202-213.

Zulfiqar, S., & Shah, A. (2013).  The use of group activities in developing personal transferable skills.  Innovations in Education and Teaching International, 50(3), 297-307.

Four Traps in Evaluation


This is a reflection on the four traps in evaluation as contemplated by Fenwick and Parsons (2009): measuring what’s easiest to measure, underestimating the learning embedded in evaluation, unexamined power, and reductionism (pp. 9-11).


These four traps in evaluation really stood out to me.  In large part it’s because they gave me an opportunity to take a long, hard look in the mirror.  Unfortunately, I didn’t like what I saw.  For instance, measuring what’s easiest to measure is a trap that I fell into as recently as last week.

In a course that I’m currently teaching, one of the learning objectives requires students to explain how the degree of causality is measured by epidemiological studies.  This was one of thirteen learning objectives under the general principles of toxicology and in hindsight a pretty minor part of the module.  However, as I reviewed the textbook while preparing the first test for this course, I noticed the ten factors of causality outlined quite nicely and thought, “this would make a really nice matching question.”  The other thing I liked about that idea was how easy it is to grade matching questions – just quickly check a pre-determined list of letters.  Looking back on that after reading the four traps in evaluation, I really questioned myself about that decision.  Was their ability to match those terms and statements really worth ten out of sixty marks on the first test?  Did that even evaluate the learning objective?  Was I just measuring what was easy to measure?  I didn’t like my honest answers to those questions and I didn’t feel particularly good about it.

Further, I really connected with the second trap: underestimating the learning embedded in evaluation.  It was interesting to read about learners being highly susceptible to learning during evaluations.  I had never thought of an evaluation as a learning tool – only ever as something to test the students.  Thinking about an evaluation as a learning tool that helps cement the learners’ knowledge and as something that learners remember about the course really shifted my perspective on how powerful evaluations can be.  But they have to be prepared properly and my hope is that I don’t fall into any more of these traps.    


There is a long road to travel before I start preparing effective evaluations.  Perhaps having a method or process for creating effective evaluations is one way of avoiding the four traps.  As I started looking into methods and processes, another term kept popping up and really caught my eye: authentic.  Even in the initial stages of my investigation, I began to question myself in a deeper way.  Questions like, “was that matching question the right way of evaluating that learning objective?” became, “was this type of pen-and-paper test even the right way to evaluate the students?”

Ashford-Rowe, Herrington and Brown (2014) established the critical elements that determine authentic assessment, with the purpose of formulating an effective model for task design.  The eight critical elements of authentic assessment noted by the study were: it should be challenging; the outcome should be a performance or product; it should ensure transfer of knowledge; metacognition should be a component; a requirement to ensure accuracy should be present; the environment and the tools used to deliver the task should be real; an opportunity to discuss and provide feedback should be provided; and collaboration should be a component.  They concluded that students responded well to tasks designed to incorporate these critical elements, as a result of “a clear understanding of the ultimate workplace benefits of having to produce authentic outcomes within authentic environments, with the use of authentic tools as part of the learning assessment” (p. 220).

Five curriculum design principles were used by Meyers and Nulty (2009) to align authentic learning environments and assessments with third-year undergraduate students in environmental and ecological studies.  The curriculum design principles that they incorporated in their teaching and learning materials, tasks and experiences were:  authentic, real-world relevance; constructive, sequential, and interlinked; progressively higher order cognitive processes; alignment; and challenging.  What they created were, “learning environments that consisted of a broad range of learning resources and activities which were structured and sequenced with an integrated assessment strategy” (p. 565).  They concluded that students became obliged to engage with their learning in a deep manner.

Kearney (2013) was quite critical of traditional assessments in higher education, stating that “students are significantly and detrimentally disengaged from the assessment process” and that they “do not address key issues of learning” (p. 875).  He attempted to conceptualize a model to improve student engagement by using authentic self- and peer-assessment for learning to enhance the student learning experience.  He proposed the authentic assessment for sustainable learning (AASL) model, which combines self-assessment, peer-assessment and instructor assessment to produce a summative grade for the student.  In this model, the instructor assessment accounts for 40% of the overall grade; two peers collaboratively mark another student’s anonymous task, with each peer’s mark accounting for 15% of the final grade; and lastly, the students mark their own task against the criteria, with the perspective of having seen one of their peers’ assignments, which accounts for the final 30% of the grade.
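As a minimal sketch of how those AASL weightings combine into a summative grade (the function name and the 0-100 mark scale are my own illustration, not from Kearney’s paper):

```python
def aasl_grade(instructor, peer_1, peer_2, self_mark):
    """Combine AASL component marks (each out of 100) into a summative grade.

    Weights follow the model described above: instructor 40%,
    each of two peer marks 15%, and self-assessment 30%.
    """
    return (0.40 * instructor
            + 0.15 * peer_1
            + 0.15 * peer_2
            + 0.30 * self_mark)

# e.g. instructor 80, peers 70 and 75, self 85 -> roughly 79.25
print(aasl_grade(80, 70, 75, 85))
```

The weights sum to 100%, so a student who earns full marks on every component receives a final grade of 100.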

These papers have really caused me to wrestle with the idea of traditional assessments that require students to individually complete what they might deem a meaningless task in order to be judged by the instructor.  Multiple choice, true and false, matching, short and long answers: no matter how varied those question types seem within the format of a traditional assessment, and regardless of how well they are prepared and written, they all have the potential to fall into any one or all of the evaluation traps.  In fact, traditional assessments themselves appear to be the problem.


It is time that I put everything I’ve learned about traditional evaluations behind me.  In fact, I am probably going to shred the “exam checklist” that has been guiding me to properly create evaluations for the past several years.  Traditional assessments are too vulnerable to the four traps.  They tend to only measure what’s easiest to measure, they underestimate the learning embedded in evaluation, they take all of the power out of the hands of the students and place it squarely in the hands of the instructor, and they focus almost entirely on the short-term.  

I would like to scrap all of the tests I have been making my students write for the past several years.  I want the 30% of the final grade currently allotted for “tests” to be entirely redirected to more progressive forms of authentic assessment (the same goes for the 35% of the grade allotted for the “final exam”).  Probably the biggest takeaway from PIDP 3100 and PIDP 3210 was to incorporate collaboration, reflection, integrated assessment, and real-world problems into every lesson that I teach.  And while I believe I’m making an honest effort at that so far, and I’ve already experienced some of the benefits in my classroom, this is the first instance where I’m realizing that my evaluations should equally incorporate those four critical aspects of adult learning.  I suppose this is the final connection in the concept of alignment between the course outline, instructional strategies, and evaluations.

While a complete rework of all of the evaluations I’ve developed so far is certainly daunting, it’s comforting to learn that there are existing models to help with this process and Kearney’s AASL is a perfect example.  There is still time this semester for me to incorporate that model into one of the modules that I teach in one of my courses.  I think a good and realistic first step is to substitute one of my upcoming traditional tests worth 10% of the final grade with an authentic assessment of some kind.


Ashford-Rowe, K., Herrington, J., & Brown, C. (2014).  Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, Vol. 39, No. 2, 205-222.

Fenwick, T.J., & Parsons, J. (2009).  The art of evaluation: A resource for educators and trainers (2nd ed.). Thompson Educational Publishing Inc.

Kearney, S. (2013).  Improving engagement: the use of ‘Authentic self- and peer-assessment for learning’ to enhance the student learning experience. Assessment & Evaluation in Higher Education, Vol. 38, No. 7, 875-891.

Meyers, N.M., & Nulty, D.D. (2009).  How to use (five) curriculum design principles to align authentic learning environments, assessment, students’ approaches to thinking and learning outcomes.  Assessment & Evaluation in Higher Education, Vol. 34, No. 5, 565-577.

Flipping the Classroom


The quote I am choosing to reflect on is “The flipped classroom is…essentially reversing the traditional order…this approach fits adult education’s values of active learner engagement and self-direction” (Merriam & Bierema, 2014, p. 207).


The concept of flipping the classroom is of great importance in my current role as an instructor at the College of the North Atlantic – Qatar (“CNA-Q”).  CNA-Q has adopted Desire2Learn (“D2L”) as a learning management system and this initiative is slowly progressing from using D2L simply as a repository for course information to a truly blended learning experience for students in which the traditional classroom is flipped.   In order to get a little bit ahead of the curve, I thought this was an opportune time to further investigate the potential benefits of – and effective strategies for – a flipped classroom. Perhaps I can be ready to try to implement some of these strategies in the upcoming semester.  Flipping the classroom makes a lot of sense to me because having the students learn basic concepts through self-direction (with some provided resources) then spending face-to-face time engaging in more complex learning activities seems like sound andragogy.  I wrestle with figuring out how to flip the classroom so I will try to focus on that throughout this reflection.


The phrase “active learner engagement” was really what got me to pay attention to the selected quote.  When I think about great courses I have attended, they often involved me being actively engaged.  Engaging students is what I strive to do as much as I possibly can, though I am beginning to realize that I can create even more opportunities to do so, and flipping the classroom seems like the next logical step toward maximizing student engagement during class time.  The flipped classroom was all the rage in education technology circles in 2013 (Cook & Triola, 2014).  The hype hasn’t totally fizzled out, as the flipped classroom has been a staple in my department meeting minutes for the past several years.  Though I’ve heard bits and pieces about what it involves, I’ve never fully investigated the benefits of a flipped classroom, nor how to go about achieving a successful iteration of it.  There are, however, adequate resources at CNA-Q to achieve a flipped classroom, and it’s about time I try.

In a short review of the flipped classroom approach, Leung, Kumta, Jin and Yung (2014) described some of the methods used with third-year medical students.  They noted that self-learning materials were developed and made available to students online, and that students were expected to access and review the material prior to class.  Class time became dedicated to “answering questions or practicing the application of knowledge in activities designed to further enhance familiarity” (Leung et al., 2014, p. 1127).  By adopting the flipped classroom approach, they reported, knowledge transfer for students regarding specific skills was improved.  Another benefit they noted was that instructors updated learning materials more frequently.

Love, Hodge, Grandgenett and Swift (2014) evaluated student learning and perceptions in a flipped linear algebra course.  In that study, students in the flipped classroom were encouraged to review course materials, including online screencasts, prior to arriving at class.  Class time was reserved for “engaging students in organized, interactive, hands-on activities” (Love et al., 2014, p. 320), which included working through problems on the board in pairs.  In addition, students in the flipped classroom were required to complete a daily readiness assessment prior to entering class, which assessed their learning and gave them an opportunity to ask for additional explanation of the content.  While the small sample size in this study did not show a significant difference in assessment scores between students in the flipped classroom and students in a traditional classroom, the flipped-classroom students gave an overwhelmingly more positive review of the class in a post-course survey than the students in the traditional classroom did.


It seems clear to me that there are enough benefits to a properly flipped classroom to dedicate the necessary time to preparing at least one of my courses this semester in this manner.  One course that might suit this format is my hazardous materials management offering, which is scheduled for three hours of lecture per week.  Flipping this classroom would require me to develop my lecture notes and references online using our D2L learning management system.  I can provide one online lesson per week, in video format, with reference material to read and/or watch.  Students would be provided with review questions to facilitate their self-directed learning, with reflective questions being used to help propel discussion at the start of the next class.  One such example, from the Love et al. (2014) study, would be: “What did you find difficult or confusing about this section?  If nothing was difficult or confusing, what did you find most interesting?” (p. 321).

Class time can begin with students pairing off, discussing this question between themselves, then sharing their answers with the rest of the class.  That way, the remainder of the instructional resources can be dedicated to teaching the students what they really want to know and what they struggle with.  The rest of the class time can be used to work through solutions to real-world problems concerning hazardous materials management.  For instance, when teaching the Globally Harmonized System (“GHS”), we can spend class time inspecting hazardous materials storage locations for compliance with GHS, rather than me telling the students what the requirements are for labels and safety data sheets.  This seems to be a much more effective approach to teaching, and one that is consistent with the principles of andragogy.


Merriam, S.B., & Bierema, L.L. (2014). Adult learning: Linking theory and practice. John Wiley & Sons Inc.

Cook, D.A., & Triola, M.M. (2014). What is the role of e-learning? Looking past the hype. Medical Education, Vol. 48, 930-937.

Leung, J.Y.C., Kumta, S.M., Jin, Y., & Yung, A.L.K. (2014).  Short review of the flipped classroom approach. Medical Education, Vol. 48, 1127.

Love, B., Hodge, A., Grandgenett, N., & Swift, A.W. (2014). Student learning and perceptions in a flipped linear algebra course. International Journal of Mathematical Education in Science and Technology, Vol. 45, No. 3, 317-324.