Published online: June 14, 2017
The purpose of this study was to investigate how thirty elementary pre-service teachers (PSTs) graded fifty-six seventh-grade students’ social studies essays before and after a rubric-norming workshop. There were no differences in reliability across PSTs’ pre- and post-training scores; however, less error was associated with pre-training scores. Inflated reliability measures ( = .97) and student survey responses indicated that the simplistic school-adopted rubric lacked the categorical descriptors needed to function as an analytic rubric. PSTs’ descriptions of their grading processes prior to any class discussion or training on how to score essays were remarkably consistent with recommended best practices for using analytic rubrics. The dissonance between the qualitative results and the quantitative analysis is likely due to the simplistic nature of the rubric, which did not differentiate scores by category, scorer, or training.
Keywords: Rubric, Writing Assessment, Teacher Education
Associate Professor, School of Education, Northern Michigan University, Marquette, Michigan, USA
Assistant Professor, Teacher Education, St. Norbert College, Green Bay, Wisconsin, USA
Middle School Teacher, English Department, Ishpeming Public Schools, Ishpeming, Michigan, USA