Recently, a few members of the College Writing Committee met to discuss the rubrics we currently use for assessment in ENG 131, 132, and 135. Turns out, we’re all doing something different. Some of us use a single-point rubric, some of us have coded feedback (F3=margins aren’t set to 1″ all the way around, refers student to the section on MLA formatting in handbook), some of us believe that rubrics are “a tool of Satan,” and some of us are somewhere in between. After a passionate discussion, we settled on the need for some kind of rubric for in-class essays that communicates the standards by which we will evaluate writing, given that the in-class essay is high-stakes: if a student does not pass one of two in-class essays, they are ineligible to earn a passing grade in the course. For what it’s worth, I dislike this requirement, but I understand that it stems from a history of students not writing their own papers and/or not being able to write at a college level, and as probationary faculty, I’ll play by the rules (for now).
So! A rubric! But what should this rubric contain? What are we measuring? Is it possible to create a rubric that can be used universally, across any type of assignment? What if I’m having my students write an in-class narrative, but my colleague is having her students write a rhetorical analysis of an advertisement? Certainly, these types of writing have different criteria: a narrative will be descriptive, will likely contain very personal details, and may not follow conventional paragraph-by-paragraph development. A rhetorical analysis will also be descriptive, but for a different purpose – it is descriptive in order to evaluate the strategies used to communicate a specific message and will focus on claims and visual evidence. We liked the single-point rubric in that it is adaptable and therefore can be modified to fit unique assignments, but deciding on base criteria has proven to be a complicated task.
Rubrics have been criticized for limiting the creative expression of student writers, turning the focus away from learning and toward standardized, externally validated notions of what “good writing” looks like. Bonus when that external validation comes from a rubric you can purchase from some big publisher who profits from this myth. In Maja Wilson’s book, Rethinking Rubrics in Writing Instruction, Alfie Kohn explains in his Foreword that “research shows three reliable effects when students are graded: They tend to think less deeply, avoid taking risks, and lose interest in the learning itself” (xi). What I found even more striking was Wilson’s comparison of “best practices” (often part of the conversation on assessment and evaluation) to the medical model that looks only at those methods that can be proven through clinical trial. She states the following: “Unfortunately, children are not bacteria to be obliterated by the correct dose of penicillin, and classes are not control groups whose every variable can be isolated” (xxii). Good teachers are attentive, and attentive teachers recognize that writing “errors” have to be looked at in context. Wilson gives the example of a student who had bounced from school to school because her parents were migrant workers and she split her time across states. Wilson goes on to explain that “The products and processes of progress we assume to be the inevitable incarnations of human genius are often shaped by powerful and not always benign social and economic forces” (11). In other words, rubrics may do more harm than good, because they don’t account for the multitude of forces acting upon our students and each individual writing task.
But, we decided as a committee that we need a rubric. And maybe that rubric can be used to help instructors identify the outcomes they are aiming for in a particular assignment. Great. Handing this rubric to students, though, becomes a game of point-seeking and grade-grubbing, as students no longer write to make meaning, but to receive a grade (and those grades have some serious power personally, professionally, and financially, at a higher education institution). I don’t think we’re off-base to want criteria, nor do I think it is unreasonable to position the teacher as an expert reader who will evaluate a piece of writing and provide feedback. But I wonder if it is possible to create a rubric that can communicate in the same “revise and resubmit” style of academic writing, and therefore would encourage learning, risk-taking, and meaning-making, rather than one that lays out a very narrow path to follow in order to earn a passing grade.
The Framework for Success in Postsecondary Writing is one resource we called upon for a less prescriptive approach to writing assessment. Its authors argue that “Standardized writing curricula or assessment instruments that emphasize formulaic writing for nonauthentic audiences will not reinforce the habits of mind and the experiences necessary for success as students encounter the writing demands of postsecondary education.” The term “assessment” appears only twice in the full document: once in that statement, and once again under composing with electronic technologies, where the authors note that students should have opportunities “to explore and develop criteria for assessing the texts.” The word “evaluate” appears three times: once under the habit of creativity, to say that students should be able to evaluate the effects or consequences of their creative choices, and twice in terms of source work. The word “grade” appears zero times.
The habits of mind referred to in that statement above are curiosity, openness, engagement, persistence, responsibility, flexibility, and metacognition. What if, in order to pass a composition course, students needed to demonstrate these habits of mind? Would good writing follow? Not necessarily; I can think of one student this semester who consistently demonstrates these habits but whose writing would not yet be considered academically successful. With these habits, though, she has less ground to cover in order to get there. The Framework also identifies “experiences” that writing instructors can create for students in order to foster these habits. These include rhetorical knowledge, critical thinking, writing processes, knowledge of conventions, and the ability to compose in multiple environments. One single assignment cannot address all of these experiences, but I think we can draw from them to develop criteria for a rubric.
First and foremost, we need to follow Wilson’s advice that good writing is “more than the sum of its rubricized parts” (xv). The rubric that we create should reflect the same freedom and flexibility in pedagogy that we aim for in our invitations for students to write for us, for their peers, and for the public. The “revise and resubmit” approach that I use mirrors that of academic writing; it is the same model used by academic journals, which either accept submissions outright, return them with a revise-and-resubmit notice, or reject them based on their subjective criteria. Subjective assessment doesn’t use terms like “organization” or “voice” or “thesis statement.” Instead, it looks past the “outward markers” of good writing and relies on “the complex nature of writing, reading, and response” (75). In order to teach writing, I need to read and respond to it as a reader, not simply assign, correct, and return it. Therefore, the criteria we come up with need to make space for transparent, subjective wording that reminds students of the importance of flexibility as they write for diverse audiences and varied purposes. Responding to and evaluating student writing as if we were neutral, objective, robotic, clinical, unaffected beings ignores the very reason we write: to connect with another person (or persons), whether that writing is utilitarian in purpose or expressive.
What criteria, then, should we use? I’m having a hard time answering this question because there are many ways of making meaning in response to a particular assignment. However, I also recognize that sometimes a writing task is straightforward, and when a piece of writing doesn’t respond to the prompt or follow the conventions for a specific rhetorical situation, it’s okay to call it out. For the ad analysis I just assigned, I’m looking to see that my students can describe an advertisement, discuss what it is doing (and point to visual evidence in the ad or historical context found through research), and explain why that matters. I’m looking to see that their essay is focused on a central idea and that I can pick out the main points in their paragraphs. I want their ideas to build in strength and power as they add layers to their argument. If a student turned in an ad analysis that was only a single paragraph or two in length that did not provide sufficient analysis, it would be returned with a request for revision and no credit would be given until they were able to complete the assignment successfully. For in-class essays, though, there isn’t the opportunity to revise.
I have two proposals:
- We design a rubric using a brief list of characteristics of college-level writing (central focus, adequate and engaging development of ideas, genre or assignment conventions), and rather than assigning points to the in-class essays, we simply pass or fail them. We could recommend that these criteria be roughly weighted 40%, 40%, and 20%. This approach would allow individual instructors to clarify what each criterion would look like specific to that assignment (e.g., the conventions for a narrative essay are different from those of a book review or business memo), would emphasize content over correctness, and is simple enough that an in-class essay (read: nobody’s best work) would have minimal consequences unless a student had plagiarized or was writing far below a college level.
- We don’t use a rubric at all, and use a pass/fail approach (because that’s ultimately what it is for these in-class essays, even when points are attached) and instead allow individual instructors to decide what counts as “passing.” The catch here is that some instructors might decide that ESWE is the only criterion that matters, and…
Never mind. I have one proposal, with the three criteria I listed above. I think we might be making a mistake by calling this assignment, and the rubric discussion that has ensued, “assessment” in the first place, per Wilson: “As long as grades or other forms of ranking are the ultimate goal of writing assessment, we will not truly be able to claim assessment for teaching and learning” (87). Further, as one colleague explained, assessment is about feedback. Evaluation is about performance. The in-class essay is a test of performance. It is evaluated, not assessed. I realize that assessment has significant value in terms of accreditation and the HLC (and maybe they could use some clarification on assessment vs. evaluation, too), but perhaps we should focus our time and attention on actual assessment and the ways in which our feedback for students is considered and applied and reconsidered in appropriate contexts, rather than debating what criteria to use for a gatekeeping assignment.
I’m not opposed to setting an achievement bar. I think expectations are necessary, just as the editors of LAJM had them when I submitted my article, and as the readers for 4C’s proposals do every year when putting together a conference program. Their expectations are subjective, but also peer-reviewed: there are agreed-upon standards applicable to that particular context. I think that the achievement bar should be set high. I also think that we need to help our students get there, and we need to be transparent about those expectations. I worry, however, that we’re spending too much time on a task that prevents students from succeeding before they’ve even had a chance to try.