This portfolio is part of the UNL Peer Review of Teaching program.
|
Bernstein Project: Addition of Web Questions Outside of Class
This course is based on conventional readings (some textbook material and some reprinted journal articles). Students typically prepare answers to study questions provided for the readings, and class time is spent having students offer their answers and respond to comments from me and from other students. Many of these questions ask students to recognize or remember specific components of the reading, since there is reason to believe that higher-order understanding is possible only when someone is fluent in the basic constructs and phenomena of a topic area. It occurred to me that class time would be better spent working through more complex examples like those on examinations; it is neither interesting nor difficult to lead the class in orally identifying the answers to reasonably straightforward questions that directly assess comprehension of the reading assignment. Accordingly, I used a website to move that activity outside of class time, freeing more time for the advanced discussion that only I could provide.

The web program (developed in the UNL math department) allows me to create any number of topics for each reading assignment, with 4-6 specific multiple-choice questions per topic. The program requires each learner to answer at least three questions per topic correctly; it gives immediate feedback and recycles questions until the criterion is met. This mastery preparation of the specific material should enable students to think more easily about the higher-order questions they are given during class time and on examinations.

Class members readily used the website throughout the term, reporting both during and after the semester that it was extraordinarily useful and valuable to them. I have never had a better-prepared class, and every day there were excellent and informed comments in class. The quality of class time was profoundly improved simply by having virtually everyone familiar with the reading assigned for the day, and whatever time went into building the web questions was more than compensated for by the incredible energy of class discussions. Moreover, those discussions were centered on more integrative material than ever before, allowing the most advanced discussions of the reading that I have seen in 15 years.

On the other hand, performance on examinations did not improve; if anything, it became slightly worse. There was no decrease in the number of students performing in the 90% range, but there were fewer in the 70s and 80s and more at or below 69%. The difference was not huge, but any change was certainly in the wrong direction.

I have had many discussions with colleagues about the possible reasons for this negative result. The emerging consensus is that there is too large a difference between the format of the web questions and the format of the examination questions. For a technological tool to have a direct impact on learning measured by essay exams, the questions would need to give students practice in recognizing or differentiating text samples that approximate the more complex answers expected on examinations. The present questions do a credible job of checking familiarity with the basic elements of the reading, but they provide neither practice nor feedback on the integrative tasks used on exams.
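For readers who want a concrete picture of the mastery criterion described above, the short sketch below illustrates the recycle-until-criterion logic in Python. It is a hypothetical illustration only, not the actual UNL math department program; the function and variable names (run_topic, required_correct, the placeholder question) are invented for the example.

```python
import random

def run_topic(questions, required_correct=3):
    """Present multiple-choice questions for one topic, recycling them
    until the learner has answered at least `required_correct` correctly.
    Each question is a dict: {"prompt": str, "choices": [str, ...], "answer": int}."""
    correct = 0
    while correct < required_correct:
        question = random.choice(questions)  # recycle questions as needed
        print(question["prompt"])
        for number, choice in enumerate(question["choices"], start=1):
            print(f"  {number}. {choice}")
        reply = int(input("Your answer: "))
        if reply - 1 == question["answer"]:
            correct += 1  # immediate feedback on each attempt
            print(f"Correct ({correct} of {required_correct} toward mastery).")
        else:
            print("Not quite; that question will come around again.")
    print("Topic complete.")

if __name__ == "__main__":
    # A topic would contain 4-6 questions in practice; one placeholder is shown here.
    demo_topic = [
        {"prompt": "Which option best matches the term defined in the reading?",
         "choices": ["option A", "option B", "option C", "option D"],
         "answer": 1},
    ]
    run_topic(demo_topic)
```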
I will teach the same course in the fall semester (to a younger, less experienced group of students), and I plan to add to the web questions a set of topics that more closely resembles the kind of generalized understanding required on exams. It remains to be seen whether this level of technology can be useful in improving student performance on exam questions that measure generalized application of an idea to new contexts.