Student-Generated Test Questions

Objective

This is a reflection on a video previously prepared for PIDP 3230 by Emma Leigha Munro, titled “Student Generated Test Questions,” which was published to YouTube on April 10, 2016, on the channel “SUBSCRIBE Thanks :)”.

Reflective

I really enjoyed this video, which was created using PowToon, an online tool for developing animated presentations.  It caught my eye and kept me more engaged and interested than many of the videos I watched that were created with Screencast-O-Matic.  I found this video because I was initially interested in student-generated test questions, but the link to an example video for this classroom assessment technique (“CAT”) on the PIDP 3230 course page redirected to a page where the video was no longer available.  As I was committed to reflecting on this topic, I searched the internet in hopes of finding another student submission, and sure enough, the first video result was Emma Leigha Munro’s YouTube video.  This helped solidify two important ideas for my upcoming video assignment: make the video as engaging as possible with appropriate video, graphics, animations, and audio; and make it available on YouTube, and in turn the public search record, so that it is more likely to be found and to teach someone in the future.

This particular CAT intrigues me because of one of its major advantages: “when students suggest test questions and try to predict the actual test questions, they are – in effect – beginning to prepare, in useful ways, for the upcoming test” (Angelo & Cross, 1993, p. 243).  This is a logical progression from one of the main principles of my first reflection, which investigated the notion proposed by Fenwick and Parsons (2009) that evaluation can be the most vital, permanent part of the learning experience (p. 9).  Therefore, a CAT that helps amplify the learning experience of an upcoming evaluation seems like an incredibly powerful tool.

Interpretive

The significance of this CAT for me is the potential for immediate application in one of the courses I am currently teaching.  This immediacy of application is, of course, one of the major motivating factors for adult learners.

In my toxicology course this semester, I have already committed to employing collaborative testing.  For the first test of the semester, I used the two-stage model proposed by the Carl Wieman Science Education Initiative (2014).  My thought is to use some class time over the two to three weeks leading up to the next test for students to collaborate on preparing student-generated test questions, which will make up the collaborative portion of that test.  This still incorporates a collaborative element in the test, even though students will complete that portion individually during the evaluation; the collaboration will happen as part of an informal assessment technique during class time leading up to the formal evaluation.

I have tried using student-generated test questions before, but after reading about this CAT as proposed by Angelo and Cross (1993), there are a number of things I did not quite get right that I will need to improve for this upcoming attempt.  The first and most noticeable shortcoming was my timing.  Angelo and Cross (1993) recommend facilitating this CAT at least two or three weeks prior to a formal evaluation (pp. 240-241), whereas I had previously given students only one week.  Second, I did not follow any real procedure when I tried it; it was carried out rather sloppily and perhaps without enough purpose.  The step-by-step procedure proposed by Angelo and Cross (1993) is therefore incredibly helpful, especially the third step, which requires a detailed explanation to students of the requirements and benefits of this CAT (p. 242).  Lastly, one of the caveats proposed by Angelo and Cross (1993) is, “do not promise categorically to include them on the test” (p. 243), which is unfortunately exactly what I did previously.

Therefore, with some slight adjustments, improvements, and adaptations, I think this can be an incredibly useful tool that I am ready, willing, and able to incorporate immediately into one of my courses this semester.

Decisional

My six students are currently working through the major topic of toxicology called absorption.  From the course outline, there are four learning outcomes for this major topic: Explain the process of absorption; Outline the methods of transport across cell membranes; Explain the factors that affect the rate of absorption; and Describe how toxins are absorbed by the skin, the gastrointestinal tract, and the respiratory tract.  

My idea for carrying out this CAT is to focus on these learning outcomes.  I would like students to come up with one question of each type for each learning outcome: each student will submit one multiple choice, one true or false, and one short answer question per outcome.  With three question types and four learning outcomes, each student will therefore submit a total of twelve questions, with sample answers, for review and feedback.

A handout will be provided to each student, explaining the process of this CAT and the benefits of participating in it.  For this particular activity, students will be assigned a learning partner to collaborate with in developing the questions.  A sample of well-developed and relevant questions will then be used to form the collaborative portion of the upcoming test, which is worth 15% of the test grade.

My idea is to establish four learning stations in class – one for each of the above-noted learning outcomes.  Each station will have a reference handout with guidance on developing each type of question.  Each pair of learning partners will be assigned to a particular station and, given approximately twenty minutes there, will prepare one multiple choice question, one true or false question, and one short answer question with sample answers.  The pairs will then rotate through the stations until they have generated questions for all of the learning outcomes.  I will need the full two-hour block of class to complete this activity.

In the next class, I will provide feedback on questions that were not developed in accordance with the guidelines and show examples of questions that were well developed.  Students will then have an opportunity to revise their questions based on that feedback.

Finally, I will use an appropriate sample of student-generated test questions to make up the collaborative portion of the upcoming test on absorption, which will account for 15% of the test grade.  Given that the students collaborated to develop the questions, and had further opportunity to discuss their questions and answers outside of class time, the marks on this portion may be slightly inflated.  However, since this is the collaborative portion of the test, which accounts for only 15% of the total test grade, and the class policy is that the collaborative portion cannot lower an individual’s grade, this seems appropriate in this case.

After all, I am more concerned with using this CAT to amplify the learning experience created by the formal evaluation, which can be the most vital, permanent part of a learning experience.  If this CAT is well executed, the students will have a great opportunity to really learn the material and have it stick, both in time for the test and beyond.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass.

Carl Wieman Science Education Initiative. (2014). Two-stage exams. Retrieved from http://www.cwsei.ubc.ca/resources/files/Two-stage_Exams.pdf

Fenwick, T. J., & Parsons, J. (2009). The art of evaluation: A resource for educators and trainers (2nd ed.). Thompson Educational Publishing.
