Don't write, just mark: the validity of assessing student ability via their computerized peer-marking of an essay rather than their creation of an essay
Abstract

This paper reports on a case study evaluating the validity of assessing students via a computerized peer-marking process rather than via their production of an essay in a particular subject area. The study assesses the higher-order skills a student demonstrates in marking, and providing consistent feedback on, an essay. To evaluate the suitability of this method of assessment for judging a student's ability, students' results in performing the peer-marking process are correlated with their results in a number of computerized multiple-choice exercises and in the production of an essay in a cognate area of the subject being undertaken. Overall, the results show the expected correlation across all three areas of assessment, as rated by the students' final grades. The results produced by quantifying the quality of the students' marking and commenting are found to map well to the overall expectations for the cohort. It is also shown that higher-performing students achieve a greater improvement in their overall marks through performing the marking process than lower-performing students do, which appears to support previous claims that awarding a 'mark for marking' rewards the demonstration of higher-order assessment skills. Finally, note is made of the impact that such an assessment method can have on eradicating the possibility of plagiarism.
Davies, Phil (2004) Don't write, just mark: the validity of assessing student ability via their computerized peer-marking of an essay rather than their creation of an essay. Association for Learning Technology Journal, 12 (3). pp. 261-277. ISSN 0968-7769 (print)/1741-1629 (online)