Note from Jeremy: This is the third and final guest post in this series by John DeLong.
Our experiment running independent projects in a large enrollment Ecology and Evolution class is now complete. Check back to Part I [link] and Part II [link] of this series if you would like to catch up on the motivation and origin of this experiment. The short version is that we wanted to replace all of the canned lab activities with student-centered activities throughout, from statistics modules to actual projects. I recently learned that what we are doing is a type of CURE (course-based undergraduate research experience), if that connects any dots for you. The hope was to give students a more authentic scientific experience, from initial ideas through to experimental design, trouble-shooting, and discovery.
In this third and final post, I will attempt to answer three questions about the overall experience: 1) what went wrong (and how we plan to fix it), 2) how the grading worked out, and 3) whether I would advocate for broader adoption of this approach.
What are the things that went wrong?
Most of the problems we faced this semester were things that cut into the time available for students to work on their projects. The main issues were:

1) Some types of organisms that we attempted to order for students were very slow in shipping, either due to supplier issues or limitations on shipping living things during the winter. Our fix is to generate a list of organisms to avoid and a list of organisms that ship quickly, and to encourage students to choose from the latter. We will also have some of the more popular organisms on hand at the beginning of the semester.

2) Our mini-projects module went faster than expected, creating an unnecessary delay in starting independent projects. We will shorten this module to two weeks and move the start date for independent projects forward.

3) Far more students than expected wanted to conduct experiments with plants indoors. These experiments created demand for lighted space that exceeded our capacity, and we underestimated the amount of soil and the number of pots to have on hand. We will now create a sign-up sheet capped at the number of lighted-space projects we can accommodate.

4) Numerous students conducted their experiments in faculty labs outside of the course space. Although this was great, it led to some excessive use of faculty time when students went down that path late in the game and asked faculty to provide tours or run-downs of how to work in their labs. In the future, we will have sign-ups for this too, along with cut-off dates for getting involved with outside labs.

5) Our TAs are all very different people and employed different approaches to providing feedback and assistance. In the future, we will do more ‘calibration’ (as one student called it) to try to align expectations for TA feedback across the sections. Although we still want TAs to put as much onus as possible on the students to solve their own problems, we will encourage them to use a similar set of approaches to facilitate that, including a ‘leading questions’ skill set common to all TAs.
How did the grading work out?
The goal of this new lab format is ‘to gain a deeper understanding of the scientific process and to improve your ability to use scientific tools’. This goal is inherently qualitative, so traditional grading does not transfer well. Instead, we used a sort of ‘yardstick’ approach: students moved through goals and tasks, gaining experience with the process of science, and as they did so they earned ‘checkoffs’ that served as earned points. Thus, we equated doing more science with learning more about science, and any student who got through all of the steps got all of their points, even if they learned different things, and different amounts, along the way.
The biggest source of variation in student grades came from writing. Students wrote papers about their projects, TAs marked the papers up, and students revised them to earn ‘improvement’ points. The trick was that, in keeping with the course philosophy of keeping everything student-centered, the TAs did not edit or rewrite the papers. They simply indicated that there was a grammar problem here or a technical problem there, and the students themselves had to figure out how to fix them. The improvement part of the paper grade then came from the fraction of flagged problems that the student fixed. It probably goes without saying, but this was one thing the students overall did not care for. Of course, it is more work for the students this way, and much harder than simply changing the paper in accordance with the TAs’ suggestions, but this approach puts the students back in the driver’s seat, where they can’t avoid thinking about why and how some aspect of their writing could be improved.
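For concreteness, the improvement scoring described above amounts to a simple proportion. Here is a minimal sketch; the 10-point scale and the handling of a draft with no flagged problems are my assumptions for illustration, not details from the course materials:

```python
def improvement_points(flagged: int, fixed: int, max_points: float = 10.0) -> float:
    """Improvement portion of the paper grade: the fraction of
    TA-flagged problems the student actually fixed in revision.
    The 10-point scale is hypothetical, not taken from the course."""
    if flagged == 0:
        # Assumption: a draft with nothing flagged earns full improvement credit.
        return max_points
    return max_points * (fixed / flagged)
```

So a student who fixed 8 of 10 flagged problems would earn 80% of the available improvement points.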
Would I advocate for broader adoption of this approach?
I asked a set of clicker questions on the last day of lecture, when students from all of the lab sections were together, to assess student perceptions of the course. When I asked whether they did in fact ‘gain a deeper understanding of the scientific process’ and whether they did improve their ‘ability to use scientific tools’, 85% of them said yes. The 15% who said ‘no’ would not explain why. When I asked them about their reaction when they first heard about the new approach, only 19% were enthusiastic about it, but at the end, 53% said they were more positive about it than they were in the beginning. A full 12% were more negative about the approach at the end. Interestingly, about a third (32%) said their opinion was influenced by how well their project worked out, and the fraction who considered their projects ‘epic fails’ (13%) was about the same as the fraction who were more negative (although these are not necessarily the same people). Most student projects worked well enough, or better than the students expected. Finally, 72% said they would remember their project and results more than they would the canned labs they have done, and 85% said they would be better prepared for new research projects in the future.
So the short answer is yes, I do advocate for broader adoption of this approach. My sense is that the independent projects approach was effective in achieving our goal of getting students more engaged in science, and although it was a challenge for some students – and perhaps even frustrating or upsetting to a few – the majority of students got out of it what we hoped they would. I cannot quantitatively compare student outcomes between this version of the lab and our previous versions, but my sense is that this approach is a big improvement. It also helps that we now clarify what student outcomes we hope to achieve, rather than mixing up content and process the way we used to. With our planned fixes to logistics and timing, I expect more students will be OK with the outcomes and move from the ‘my project didn’t work and therefore I don’t like the lab’ place to a place where at least they are positive about their project.
The reason I think this type of lab should work is that the sense of ownership students develop when doing their own thing drives investment of time and creativity in their projects, motivates troubleshooting, and gives students a reason to care about the outcome and what the results mean. We rarely observed students caring about or taking ownership of projects when the course ran canned labs. Without that sense of ownership, even the coolest of canned labs (including some pretty decent labs that we used to run) are going to fall flat, with students stuck in the perennial mindset of doing the minimum and trying to get out of lab as quickly as possible.
Figuring out how to make the shift was the real trick. What we did was switch from an entrenched canned lab format to a new one: we shut down the exercises that my colleagues had built over the last decade, completely reoriented the grading approach from content to benchmarks, and released 100 20-somethings to do whatever throughout the building. Making this switch required believing that it would be so much better than before that whatever went wrong would be worth it. So yeah, I had to believe in it and sell it. Hearing numerous students express to me (directly and through the TAs) how much they enjoyed the course and how much they got out of it has certainly cemented my view that this is indeed a better way. Maybe I have confirmation bias, but I am also thinking about the dozen students for whom we need to do a better job next time. Furthermore, no one – from colleagues to TAs to students – has articulated a reason why this isn’t the better way.
There are risks inherent to attempting this type of course. First, it might not go very well, whether because of the chaos of having so many concurrent projects, because of space constraints, or because of low student buy-in. I managed my worries about these risks by simply being OK with having so many projects going on everywhere and by viewing the chaos and the student struggles as key features of the learning process. In other words, I didn’t try to save anyone’s project. I also encouraged students to go wherever they needed to find space and assistance, diminishing the effect of our space constraints on student projects. And finally, I personally went to the lecture and then to each lab section again to explain in detail what we were trying to do and why, and how exactly our expectations differed from their previous lab experiences, to achieve as high a level of buy-in as I possibly could. Second, you might not get support from colleagues who are not ready to let go of the old labs. For this risk, I just made an effort to talk to anyone who would listen about the idea, and in my case, I found considerable support throughout my department for the endeavor (for which I am very thankful), so I didn’t worry too much. Still, I am unsure whether everyone actually supports grading on process rather than content.
To wrap up, I want to thank our former director John Osterman for supporting this effort and allowing me to focus on it as my official teaching assignment. Thanks to the amazing TAs (Maria Goller, Ben Reed, Miranda Salsbery, and Laura Vander-Meiden) and the course’s main instructor Kate Lyons, who were game for a radical lab overhaul and who capably handled the rollout in all its chaotic glory. Thanks to Jeremy Fox for feedback on these posts and for hosting them on Dynamic Ecology. Our materials are available for anyone who would like to use them or to see how we put it all together.