The Effects of a Rubric-Norming Workshop on the Variability of Elementary Pre-Service Teachers’ Scoring Practices

By Derek Anderson, Sharon Bohjanen, and Jennifer Klipp.

Published by The International Journal of Assessment and Evaluation

Format and price:
- Article, print: US$10.00
- Article, electronic: US$5.00

Published online: June 14, 2017

The purpose of this study was to investigate how thirty elementary pre-service teachers (PSTs) graded fifty-six seventh-grade students’ social studies essays before and after a rubric-norming workshop. There were no differences in reliability between PSTs’ pre- and post-training scores; however, less error was associated with pre-training scores. Inflated reliability measures (.97) and student survey responses indicated that the simplistic school-adopted rubric lacked adequate categorical descriptors for use as an analytic rubric. PSTs’ descriptions of their grading processes prior to any class discussion or training about how to score essays were remarkably consistent with recommended best practices for using analytic rubrics. The dissonance between the qualitative results and the quantitative analysis is likely due to the simplistic nature of the rubric, which did not differentiate among scores by category, scorer, or training.

Keywords: Rubric, Writing Assessment, Teacher Education

The International Journal of Assessment and Evaluation, Volume 24, Issue 2, pp. 33-49. Available in print (spiral bound) and electronic (PDF, 653 KB) formats. Published online: June 14, 2017.

Dr. Derek Anderson

Associate Professor, School of Education, Northern Michigan University, Marquette, Michigan, USA

Dr. Sharon Bohjanen

Assistant Professor, Teacher Education, St. Norbert College, Green Bay, Wisconsin, USA

Jennifer Klipp

Middle School Teacher, English Department, Ishpeming Public Schools, Ishpeming, Michigan, USA