
Peer assessment is an educational activity in which students judge the performance of their peers. It can take different forms depending on how it is implemented, the learners, and the learning context,[1] ranging from summative purposes (e.g., peer grading, peer evaluation) to formative purposes (e.g., peer feedback).[2] Scholars have made a significant effort to map the characteristics of peer assessment, usually in the form of taxonomies or constellations. The first such taxonomy, by Topping (1998), already included seventeen characteristics, a number that later publications have expanded and rearranged.[1] Peer assessment has been shown to have positive effects on achievement,[3] and though it has been claimed that it could increase self-regulation, empirical evidence supporting this claim is currently lacking.[4] An important line of research focuses on the interpersonal effects of peer assessment,[5] with the most recent and comprehensive review showing that formative approaches may be more beneficial, decreasing negative interpersonal and motivational effects.[6]

Advantages of self and peer assessment

Saves teachers' time

Although it has long been claimed that having students grade assignments can save teachers' time,[7] there is evidence that this might not be the case.[6] A large study of Spanish teachers showed that they did not feel peer assessment saved them time, because implementing it, especially for formative purposes, requires a considerable amount of instructional support.[8]

Faster feedback

Having students grade papers in class or assess their peers' oral presentations[9] shortens the time it takes for students to receive feedback. Instead of waiting for their work to be returned, students see their assignments graded soon after completion, before they have moved on to new material and while the information is still fresh in their minds.[10]

Faster turnaround of feedback has also been shown to increase the likelihood that the recipient acts on it. A controlled experiment in a Massive Open Online Course (MOOC) setting found that students' final grades improved when feedback was delivered quickly, but not when it was delayed by 24 hours.[11]

Pedagogical

When the teacher is the sole evaluator, students tend to focus on grades rather than on seeking feedback.[12] Students can learn from grading the papers[10] or assessing the oral presentations of others.[13] Often, teachers do not go over test answers and give students the chance to learn what they did wrong. Self- and peer-assessment allow teachers to help students understand the mistakes they have made, which improves subsequent work and gives students time to digest information, potentially leading to better understanding.[14] A study by Sadler and Good found that students who self-graded their tests did better on later tests: they could see what they had done wrong and were able to correct such errors in later assignments. After peer grading, students did not necessarily achieve higher results.[15]

Metacognitive

Through self- and peer-assessment students are able to see mistakes in their thinking and can correct any problems in future assignments. By grading assignments, students may learn how to complete assignments more accurately and how to improve their test results.[10]

Professors Lin-Agler, Moore, and Zabrucky conducted an experiment in which they found “that students are able to use their previous experience from preparing for and taking a test to help them build a link between their study time allocation.”[16] After participating in self- and peer-assessment, students can not only improve their ability to study for a test but also, through improved metacognitive thinking, enhance their ability to evaluate others.[17]

Attitude

If self- and peer-assessment are implemented, students can come to see tests not as punishments but as useful feedback.[17] Hal Malehorn says that by using peer evaluation, classmates can work together for “common intellectual welfare” and that it can create a “cooperative atmosphere” for students instead of one where students compete for grades.[18] In addition, when students assess the work of their fellow students, they also reflect on their own work. This reflective process stimulates action for improvement.[19]

However, in the Supreme Court case Owasso Independent School District v. Falvo, the school was sued after a student was victimized when classmates learned that he had received a low test score.[20] Malehorn describes what an idealized version of peer-assessment can do for classroom attitude; in practice, as this case shows, situations can arise in which students are victimized.

Teacher grading agreement

One concern about self- and peer-assessment is that students may give higher grades than teachers. Teachers want to reduce grading time but not at the cost of losing accuracy.[21]

Support

A study by Sadler and Good has shown that there is a high level of agreement between grades assigned by teachers and students, as long as students understand the teacher's quality requirements. They also report that teacher grading can become more accurate as a result of using self- and peer-assessment: if teachers look at how students grade themselves, they have more information available from which to assign an accurate grade.[22]

Opposition

However, Sadler and Good warn that there is some disagreement. They suggest that teachers implement systems to moderate student grading in order to catch unsatisfactory work.[22] Another study reported that grade inflation did occur: students tended to grade themselves higher than a teacher would have. This divergence suggests that self- and peer-assessment may not be an accurate method of grading.[23]

Comparison

According to the study by Sadler and Good, students who peer-grade tend to undergrade, while students who self-grade tend to overgrade. However, a large majority of students come within 5% of the teacher's grade. Relatively few self-graders undergrade, and relatively few peer-graders overgrade.[21]

Perhaps one of the most prominent models of peer-assessment can be found in design studios.[24][25] One benefit of such studios comes from structured contrasts, which can help novices notice differences that might otherwise be accessible only to experts.[26] Indeed, using comparisons for inspiration is a well-known strategy among designers.[27][28] Some researchers have built systems that surface helpful comparative examples in educational settings.[29][30][31] However, what makes a good comparison remains unclear. Sadler's widely adopted guidance describes three characteristics of good feedback: it should be specific, actionable, and justified.[32] Yet because each piece of work to be evaluated differs so vastly in content, how to achieve those qualities in a specific piece of feedback remains largely unknown. Effective feedback is not only written actionably, specifically, and in a justified manner, but more importantly contains good content: it points out relevant things, brings in new insights, and leads its recipients to consider the problem from a different angle, or to re-represent it completely. This requires content-specific customization.

Rubrics

Purpose

Students need guidelines to follow before they are able to grade open-ended questions. These often come in the form of rubrics, which lay out different objectives and how much each is worth when grading.[17] Rubrics are often used for writing assignments.[33]

Examples of objectives

  • Expression of ideas
  • Organization of content
  • Originality
  • Subject knowledge
  • Content
  • Curriculum alignment
  • Balance
  • Voice

Group work

One area in which self- and peer-assessment is applied is group projects. Teachers can give a project a final grade but also need to determine what grade each individual in the group deserves. Students can grade their peers, and individual grades can be based on these assessments. Nevertheless, this grading method has problems: if students grade each other unfairly, they can skew the grades.[34]
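One common way to turn peer ratings into individual grades is an individual weighting factor: each student's grade is the group grade scaled by their mean received rating relative to the group-wide mean. A minimal sketch in Python (the function, names, and rating scale are illustrative assumptions, not the exact method of any cited study):

```python
def individual_grades(group_grade, peer_ratings):
    """Derive individual grades from a shared group grade.

    peer_ratings maps each student to the list of ratings they
    received from their teammates (e.g., on a 1-5 scale). Each
    student's weighting factor is their mean received rating
    divided by the group-wide mean rating.
    """
    means = {s: sum(r) / len(r) for s, r in peer_ratings.items()}
    overall = sum(means.values()) / len(means)
    return {s: round(group_grade * m / overall, 1) for s, m in means.items()}

ratings = {
    "Ana":   [5, 4, 5],   # ratings received from the other members
    "Ben":   [3, 3, 4],
    "Carla": [4, 4, 4],
}
print(individual_grades(80, ratings))  # Ana above 80, Ben below, Carla at 80.0
```

With a group grade of 80, a student rated well above the group mean receives more than 80, and one rated below it receives less, so the shared grade is redistributed according to perceived contribution.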

Overgenerosity

Some students may give all of the other students very high grades, which lowers their own score relative to the others. This can be addressed by having students grade themselves as well: their generosity then extends to themselves and raises their own grade by the same amount. However, this does not compensate for students who grade themselves too harshly.[35]
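An alternative correction, not from the cited work, is to recentre each grader's ratings on the cohort mean, so that a uniformly generous or uniformly harsh grader no longer shifts anyone's result. A hypothetical sketch:

```python
def normalize_by_grader(given):
    """Recentre each grader's ratings on the cohort-wide mean so a
    uniformly generous (or harsh) grader no longer skews results.

    `given` maps each grader to a {gradee: rating} dict; returns the
    bias-adjusted mean rating received by each gradee.
    """
    all_ratings = [r for gr in given.values() for r in gr.values()]
    cohort_mean = sum(all_ratings) / len(all_ratings)
    adjusted = {}
    for grader, ratings in given.items():
        # How far above/below the cohort mean this grader rates on average.
        bias = sum(ratings.values()) / len(ratings) - cohort_mean
        for gradee, r in ratings.items():
            adjusted.setdefault(gradee, []).append(r - bias)
    return {g: sum(rs) / len(rs) for g, rs in adjusted.items()}

# P rates everyone 5 (generous); Q distinguishes between X and Y.
print(normalize_by_grader({"P": {"X": 5, "Y": 5}, "Q": {"X": 3, "Y": 1}}))
```

After the adjustment, P's blanket 5s no longer mask Q's distinction between the two gradees.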

Creative accounting

Some students will award everybody low marks and themselves very high marks in order to bias the data. This can be countered by checking students' grades and making sure that they are consistent with where their peers placed them within the group.[36]
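A simple consistency check along these lines (a hypothetical sketch; the names and tolerance threshold are illustrative) flags any student whose self-rating exceeds the mean rating their peers gave them by more than a set margin:

```python
def flag_inconsistent(self_ratings, peer_ratings, tolerance=1.0):
    """Flag students whose self-rating exceeds the mean rating
    their peers gave them by more than `tolerance` points."""
    flagged = []
    for student, own in self_ratings.items():
        peer_mean = sum(peer_ratings[student]) / len(peer_ratings[student])
        if own - peer_mean > tolerance:
            flagged.append(student)
    return flagged

print(flag_inconsistent({"Dana": 5, "Eli": 4},
                        {"Dana": [2, 3, 2], "Eli": [4, 4, 5]}))  # ['Dana']
```

Dana rates herself 5 while her peers average about 2.3, so she is flagged for review; Eli's self-rating matches his peers' view and passes the check.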

Individual penalization

If all of the students go against one student because they feel that the individual did little work, then she or he will receive a very low grade. This is permissible if the student in question really did do very little work, but cases such as this should be monitored closely.[36]

Classroom participation

While it is difficult to grade classroom participation because of its subjective nature, one method is to use self- and peer-assessment. Professors Ryan, Marshall, Porter, and Jia conducted an experiment to see whether having students grade participation was effective. They found a difference between teachers' and students' evaluations of participation, but the difference had no academic significance: students' final grades were not affected. They concluded that self- and peer-assessment is an effective way to grade classroom participation.[37]

Peer-assessment at scale

Peer assessment is also the gold standard in many creative domains, ranging from reviewing the quality of scholarly articles and grant proposals to design studio critiques. However, challenges arise as the number of assessments grows. Because no single assessor has a global understanding of the entire pool of submissions, local biases in judgment may be introduced (e.g., the range of the rating scale an assessor uses may be affected by the particular submissions they review), and noise may enter the ranking aggregated from individual assessments. At the same time, because the ranked outcome is often of utmost interest (e.g., when allocating research grants to proposals or assigning letter grades to students), ways to systematically aggregate peer-wise assessments into a ranked order of submissions have many practical implications.

To tackle this, researchers have studied (1) evaluation schemes (e.g., ordinal grading),[38] (2) algorithms that aggregate pairwise evaluations to more robustly estimate the global ranking of submissions,[39] and (3) ways to form better pairs for exchanging feedback, either by accounting for conflicts of interest[40] or by modeling a framework that reduces the error between individual- and community-level judgments of a scholarly article's value.[41]
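As a minimal illustration of approach (2), pairwise judgments can be aggregated into a global ranking by counting wins, a simple Borda-style heuristic; the published algorithms cited above are considerably more sophisticated (e.g., probabilistic models such as Bradley–Terry):

```python
from collections import Counter

def rank_from_pairs(pairs):
    """Aggregate pairwise judgments (winner, loser) into a global
    ranking by win count -- a simple Borda-style heuristic."""
    wins = Counter()
    for winner, loser in pairs:
        wins[winner] += 1
        wins[loser] += 0  # ensure every submission appears in the tally
    return [submission for submission, _ in wins.most_common()]

judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("A", "B")]
print(rank_from_pairs(judgments))  # A has three wins; B and C one each
```

Win-counting illustrates why noise matters: a single flipped judgment among low-scoring submissions can swap their order, which is exactly the instability that more robust aggregation methods aim to reduce.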

Legality

The legality of self- and peer-assessment was challenged in the Supreme Court case Owasso Independent School District v. Falvo. Kristja Falvo sued the school district her son attended because it used peer-assessment and he was teased about a low score. The court upheld the teacher's right to use self- and peer-assessment.[42]

Notes

  1. ^ a b Alqassab, M & Panadero, E. (2018). "Peer assessment." In Brookhart, S. et al. (Eds.) Routledge Encyclopedia of Education. New York: Routledge
  2. ^ Panadero, E., Jonsson, A., & Alqassab, M. (2018). "Providing formative peer feedback: What do we know?" In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge Handbook of Instructional Feedback: Cambridge University Press.
  3. ^ van Zundert, Marjo; Sluijsmans, Dominique; van Merriënboer, Jeroen (August 2010). "Effective peer assessment processes: Research findings and future directions". Learning and Instruction. 20 (4): 270–279. doi:10.1016/j.learninstruc.2009.08.004. ISSN 0959-4752.
  4. ^ Panadero, E., Jonsson, A., & Strijbos, J. W. (2016). "Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation." In D. Laveault & L. Allal (Eds.), Assessment for Learning: Meeting the Challenge of Implementation (pp. 311–326). New York: Springer.
  5. ^ van Gennip, Nanine A.E.; Segers, Mien S.R.; Tillema, Harm H. (January 2009). "Peer assessment for learning from a social perspective: The influence of interpersonal variables and structural features". Educational Research Review. 4 (1): 41–54. doi:10.1016/j.edurev.2008.11.002. ISSN 1747-938X.
  6. ^ a b Panadero, E. (2016). "Is it safe? Social, interpersonal, and human effects of peer assessment: A review and future directions." In G. T. L. Brown & L. R. Harris (Eds.), Handbook of Human and Social Conditions in Assessment (pp. 247–266). New York: Routledge.
  7. ^ Searby, Mike, and Tim Ewers An evaluation of the use of peer assessment in higher education: A case study in the School of Music p.371
  8. ^ Panadero, Ernesto; Brown, Gavin T. L. (2017). "Teachers' reasons for using peer assessment: Positive experience predicts use". European Journal of Psychology of Education. 32: 133–156. doi:10.1007/s10212-015-0282-5. hdl:10486/679215.
  9. ^ Ireland, Christopher (2012). "Peer e-assessment of oral presentations". Unpublished. doi:10.13140/RG.2.1.4007.4408.
  10. ^ a b c Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.2
  11. ^ Kulkarni, Chinmay E., Michael S. Bernstein, and Scott R. Klemmer. "PeerStudio: rapid peer feedback emphasizes revision and improves performance." Proceedings of the second (2015) ACM conference on learning@ scale. ACM, 2015.
  12. ^ J. Scott Armstrong (2012). "Natural Learning in Higher Education". Encyclopedia of the Sciences of Learning.
  13. ^ Potter, M., English, J. & Ireland, C. (2014) How far can peers go in supporting student learning? A Student’s Perspective. In: 11th ALDinHE Conference: Learning Development Spaces and Places, 14-16 Apr 2014, University of Huddersfield, Huddersfield, United Kingdom. https://www.researchgate.net/publication/301601738_How_far_can_peers_go_in_supporting_student_learning_A_Student%27s_Perspective
  14. ^ Ngar-Fun, Liu, and David Carless Peer feedback: the learning element of peer assessment p.281
  15. ^ Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.24
  16. ^ Lin-Agler, Lin Miao, DeWayne Moore, and Karen M. Zabrucky EFFECTS OF PERSONALITY ON METACOGNITIVE SELF-ASSESSMENTS p.461
  17. ^ a b c Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.3
  18. ^ Malehorn, Hal Ten measures better than grading p.323
  19. ^ Kristanto, Yosep Dwi (2018). "Technology-enhanced pre-instructional peer assessment: Exploring students' perceptions in a Statistical Methods course". REiD (Research and Evaluation in Education). 4 (2): 105–116.
  20. ^ Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.1
  21. ^ a b Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.16
  22. ^ a b Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.23
  23. ^ Strong, Brent, Mark Davis, and Val Hawks SELF-GRADING IN LARGE GENERAL EDUCATION CLASSES p.52
  24. ^ Dannels, Deanna P., and Kelly Norris Martin. "Critiquing critiques: A genre analysis of feedback across novice to expert design studios." Journal of Business and Technical Communication 22.2 (2008): 135-159.
  25. ^ Goldschmidt, Gabriela, Hagay Hochman, and Itay Dafni. "The design studio “crit”: Teacher–student communication." AI EDAM 24.3 (2010): 285-302.
  26. ^ Schwartz, Daniel L., Jessica M. Tsang, and Kristen P. Blair. The ABCs of how we learn: 26 scientifically proven approaches, how they work, and when to use them. WW Norton & Company, 2016.
  27. ^ Herring, Scarlett R., et al. "Getting inspired!: understanding how and why examples are used in creative design practice." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2009.
  28. ^ Newman, Mark W., and James A. Landay. "Sitemaps, storyboards, and specifications: a sketch of Web site design practice." Proceedings of the 3rd conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, 2000.
  29. ^ Cambre, Julia, Scott Klemmer, and Chinmay Kulkarni. "Juxtapeer: Comparative peer review yields higher quality feedback and promotes deeper reflection." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  30. ^ Kang, Hyeonsu B., et al. "Paragon: An Online Gallery for Enhancing Design Feedback with Visual Examples." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  31. ^ Potter, Tiffany, et al. "ComPAIR: A New Online Tool Using Adaptive Comparative Judgement to Support Learning with Peer Feedback." Teaching & Learning Inquiry 5.2 (2017): 89-113.
  32. ^ Sadler, D. Royce. "Formative assessment and the design of instructional systems." Instructional science 18.2 (1989): 119-144.
  33. ^ Andrade, Heidi, and Ying Du Student responses to criteria-referenced self-assessment p.287
  34. ^ Li, Lawrence K. Y. Some Refinements on Peer Assessment of Group Projects p.5
  35. ^ Li, Lawrence K. Y. Some Refinements on Peer Assessment of Group Projects p.8
  36. ^ a b Li, Lawrence K. Y. Some Refinements on Peer Assessment of Group Projects p.9
  37. ^ Ryan, Gina J., et al. Peer, professor and self-evaluation of class participation p.56
  38. ^ Raman, Karthik, and Thorsten Joachims. "Methods for ordinal peer grading." Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014.
  39. ^ Chen, Xi, et al. "Pairwise ranking aggregation in a crowdsourced setting." Proceedings of the sixth ACM international conference on Web search and data mining. ACM, 2013.
  40. ^ Kotturi, Yasmine, et al. "Rising above Conflicts of Interest: Algorithms and Interfaces to Assess Peers Impartially." 2013.
  41. ^ Noothigattu, Ritesh, Nihar B. Shah, and Ariel D. Procaccia. "Choosing How to Choose Papers." arXiv preprint arXiv:1808.09057 (2018).
  42. ^ Sadler, Philip M., and Eddie Good The Impact of Self- and Peer-Grading on Student Learning p.9

References

  • Andrade, Heidi, and Ying Du "Student responses to criteria-referenced self-assessment." Assessment & Evaluation in Higher Education 32.2 (2007): 159–181.
  • Gopinath, C. "Alternatives to Instructor Assessment of Class Participation." Journal of Education for Business 75.1 (1999): 10.
  • Li, Lawrence K. Y. "Some Refinements on Peer Assessment of Group Projects." Assessment & Evaluation in Higher Education 26.1 (2001): 5–18.
  • Lin-Agler, Lin Miao, DeWayne Moore, and Karen M. Zabrucky "EFFECTS OF PERSONALITY ON METACOGNITIVE SELF-ASSESSMENTS." College Student Journal 38.3 (2004): 453–461.
  • Malehorn, Hal "Ten measures better than grading." Clearing House 67.6 (1994): 323.
  • Mok, Magdalena Mo Ching, et al. "Self-assessment in higher education: experience in using a metacognitive approach in five case studies." Assessment & Evaluation in Higher Education 31.4 (2006): 415–433.
  • Ngar-Fun, Liu, and David Carless "Peer feedback: the learning element of peer assessment." Teaching in Higher Education 11.3 (2006): 279–290.
  • Ryan, Gina J., et al. "Peer, professor and self-evaluation of class participation." Active Learning in Higher Education 8.1 (2007): 49–61.
  • Sadler, Philip M., and Eddie Good "The Impact of Self- and Peer-Grading on Student Learning." Educational Assessment 11.1 (2006): 1–31.
  • Searby, Mike, and Tim Ewers "An evaluation of the use of peer assessment in higher education: A case study in the School of Music." Assessment & Evaluation in Higher Education 22.4 (1997): 371.
  • Strong, Brent, Mark Davis, and Val Hawks "SELF-GRADING IN LARGE GENERAL EDUCATION CLASSES." College Teaching 52.2 (2004): 52–57.
  • van den Berg, Ineke, Wilfried Admiraal, and Albert Pilot "Peer assessment in university teaching: evaluating seven course designs." Assessment & Evaluation in Higher Education 31.1 (2006): 19–36.
  • Raman, Karthik, and Thorsten Joachims. "Methods for ordinal peer grading." Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014.
  • Chen, Xi, et al. "Pairwise ranking aggregation in a crowdsourced setting." Proceedings of the sixth ACM international conference on Web search and data mining. ACM, 2013.
  • Kotturi, Yasmine, et al. "Rising above Conflicts of Interest: Algorithms and Interfaces to Assess Peers Impartially." 2013.
  • Noothigattu, Ritesh, Nihar B. Shah, and Ariel D. Procaccia. "Choosing How to Choose Papers." arXiv preprint arXiv:1808.09057 (2018).
  • Kulkarni, Chinmay E., Michael S. Bernstein, and Scott R. Klemmer. "PeerStudio: rapid peer feedback emphasizes revision and improves performance." Proceedings of the second (2015) ACM conference on learning@ scale. ACM, 2015.
  • Dannels, Deanna P., and Kelly Norris Martin. "Critiquing critiques: A genre analysis of feedback across novice to expert design studios." Journal of Business and Technical Communication 22.2 (2008): 135-159.
  • Schwartz, Daniel L., Jessica M. Tsang, and Kristen P. Blair. The ABCs of how we learn: 26 scientifically proven approaches, how they work, and when to use them. WW Norton & Company, 2016.
  • Herring, Scarlett R., et al. "Getting inspired!: understanding how and why examples are used in creative design practice." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2009.
  • Newman, Mark W., and James A. Landay. "Sitemaps, storyboards, and specifications: a sketch of Web site design practice." Proceedings of the 3rd conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, 2000.
  • Cambre, Julia, Scott Klemmer, and Chinmay Kulkarni. "Juxtapeer: Comparative peer review yields higher quality feedback and promotes deeper reflection." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  • Kang, Hyeonsu B., et al. "Paragon: An Online Gallery for Enhancing Design Feedback with Visual Examples." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
  • Potter, Tiffany, et al. "ComPAIR: A New Online Tool Using Adaptive Comparative Judgement to Support Learning with Peer Feedback." Teaching & Learning Inquiry 5.2 (2017): 89-113.
  • Sadler, D. Royce. "Formative assessment and the design of instructional systems." Instructional science 18.2 (1989): 119-144.
  • Goldschmidt, Gabriela, Hagay Hochman, and Itay Dafni. "The design studio “crit”: Teacher–student communication." AI EDAM 24.3 (2010): 285-302.


Further reading