Analytic Rubric Format: How Category Position Affects Rater Scoring

Presented at: MwALT 2017

Rater behavior affects the reliability and validity of writing tests. When using analytic rubrics, raters are expected to allocate equal attention to each rubric criterion (Barkaoui, 2007; Weigle, 2002). However, unequal attention across categories covaries with category reliability (Winke & Lim, 2015) and, therefore, with overall test reliability. Yet the effects of rubric format (i.e., category order) on rater attention and scoring have not been investigated.

In this study, I investigate whether rubric format (i.e., category position) affects how raters score each category. In a within-subjects, counterbalanced design, 31 novice raters were randomly assigned to two groups and trained on two rubrics across two rounds. The analytic rubrics were identical in content and differed only in the order in which the categories appeared. In Round 1, raters trained on one rubric and rated 20 essays. Five weeks later, in Round 2, raters trained on the alternate rubric and re-rated the same 20 essays. I performed Rasch analysis to estimate raters' scoring consistency and severity for each rubric category. Results show that raters' scoring behavior for the outermost positions (i.e., the leftmost and rightmost categories) was the most susceptible to ordering effects: category position affected how severely raters scored a given category. There was also evidence of an interplay between category type and category position, producing halo effects in category scores that aligned with the alternating rubric formats. Based on these findings, I discuss rater training, rubric design, and test-construct considerations for rubric designers and test developers.
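Although the abstract does not specify the exact model, analyses of rater severity and consistency of this kind are commonly run as a many-facet Rasch model (e.g., in the FACETS software). A standard formulation, assuming three facets (essay n, rubric category i, and rater j) and scale steps k, is:

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is the ability reflected in essay n, D_i is the difficulty of category i, C_j is the severity of rater j, and F_k is the difficulty of scale step k relative to step k-1. Under such a model, a category-position effect would surface as shifts in severity estimates for a given category across the two rubric formats.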