I'm curious if anyone has tried "ungrading" in their STEM courses and, if so, how it worked out. The idea is to move away from using points to determine a student's grade and instead use different types of assessment that better motivate students to learn.
https://www.jessestommel.com/why-i-dont-grade/
https://www.chemedx.org/blog/ungrading-what-it-and-why-should-we-use-it
My interest arises from my experiences since classes went remote because of the pandemic. Like many other instructors, I saw a mysterious increase in performance by many students on exams (as well as obvious signs of cheating in some cases). To reduce the incentive to cheat, I replaced most of these high-stakes exams with low-stakes weekly problems, where students had to write up a solution: identify the relevant physical concepts, explain their problem-solving strategy, and finally solve the problem. It wasn't enough to just write down a bunch of math, which they could easily find on Chegg or somewhere else on the internet; they actually had to articulate the reasoning involved. I have to say I was pleasantly surprised by the quality of the write-ups from some of my students.
There were some problems, however, the main one being assessment. I developed a rubric, but it would sometimes result in a grade that I didn't feel accurately reflected the quality of the work. Over time I've modified the rubric, but I've never been happy with the results. This semester, I'm considering just giving scores of "satisfactory," "needs revision," and "not submitted," and recording audio feedback on what I thought they did well, what could use improvement, etc. I'm still thinking about how to translate these results into the letter grade I have to assign at the end of the semester.
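To make that last part concrete, here is the rough kind of mapping I've been toying with. The thresholds and the counting rule are purely hypothetical placeholders, not something I've settled on, just a sketch of how a three-level scale could turn into a letter grade:

```python
# Hypothetical sketch: convert weekly write-up marks into a letter grade.
# The cutoffs below are placeholders, not a scheme I've committed to.

def letter_grade(marks):
    """marks: list of 'satisfactory', 'needs revision', or 'not submitted'."""
    total = len(marks)
    satisfactory = marks.count("satisfactory")
    submitted = total - marks.count("not submitted")

    frac_satisfactory = satisfactory / total
    frac_submitted = submitted / total

    # Reward consistently satisfactory work; missing submissions hurt most.
    if frac_satisfactory >= 0.90:
        return "A"
    elif frac_satisfactory >= 0.75:
        return "B"
    elif frac_satisfactory >= 0.60 and frac_submitted >= 0.80:
        return "C"
    elif frac_submitted >= 0.60:
        return "D"
    else:
        return "F"

# Example: 14 weekly problems over the semester
marks = ["satisfactory"] * 11 + ["needs revision"] * 2 + ["not submitted"]
print(letter_grade(marks))  # -> "B" (11/14 ≈ 0.79 satisfactory)
```

Something along these lines would keep the week-to-week feedback low-stakes while still giving me a defensible rule at the end of the semester, but I'd welcome better approaches.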
Anyway, I would love to hear any comments, ideas, or tips, and about anyone's (student or faculty) experiences with these types of assessments.