Interrater reliability in the scoring of students' written expression
Reforming assessment systems to better measure the extent to which students are meeting national and state content and performance standards has presented new challenges for educators. Despite these challenges, reforming assessment systems is essential to the success of broader reform efforts. In the area of written expression, rater subjectivity can affect the rating of a student's performance and, in turn, the rating of schools in the accountability scheme. This study was designed to (a) ascertain the level of use of the writing rubric in classroom assessments, (b) analyze the effect of training in analytic evaluation procedures on raters' scoring reliability, (c) compute alpha reliability coefficients when a selected analytic evaluation procedure was used by 100 selected teacher raters to score 20 writing samples by third-grade students, and (d) analyze the effect, if any, of selected rater characteristics on the rating of writing samples.
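The alpha reliability coefficient mentioned above (Cronbach's alpha) can be computed by treating each rater as an "item" and each writing sample as a "case." The sketch below illustrates that calculation; the rater counts, score matrix, and function name are illustrative assumptions, not the study's actual data or procedure.

```python
# Illustrative sketch: Cronbach's alpha with raters as "items" and
# writing samples as "cases." The data below are hypothetical.
import statistics

def cronbach_alpha(scores):
    """scores: one row per writing sample; each row holds one score
    per rater (all rows the same length)."""
    k = len(scores[0])  # number of raters
    # Variance of each rater's scores across the writing samples
    rater_vars = [statistics.variance([row[i] for row in scores])
                  for i in range(k)]
    # Variance of the per-sample total scores (summed across raters)
    totals = [sum(row) for row in scores]
    total_var = statistics.variance(totals)
    # Standard Cronbach's alpha formula
    return (k / (k - 1)) * (1 - sum(rater_vars) / total_var)

# Hypothetical example: 5 samples scored by 3 raters on a 1-4 rubric
scores = [
    [3, 3, 4],
    [2, 2, 2],
    [4, 4, 4],
    [1, 2, 1],
    [3, 4, 3],
]
print(round(cronbach_alpha(scores), 3))  # → 0.944
```

Values near 1.0 indicate that raters rank the samples consistently; low values suggest the kind of rater subjectivity the study investigates.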