Once a workshop is completed, two grades are displayed to a student: a grade for submission and a grade for assessment.
The grade for submission is calculated from the received peer assessments, which can be inspected by opening the student submission.
The grade for assessment, in turn, appears to be calculated from how closely the student's given assessments match the mean of all assessments for the same submission (I'm not 100% sure of this, but that's what it looks like). However, this cannot be inspected: when an assessed submission is opened, the only assessment displayed is the student's own. As a result, whenever the grade for assessment is below 100%, the student has no way of knowing which mistakes they made in their assessments.
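To illustrate the suspected behaviour, here is a minimal sketch of a mean-based calculation. This is purely an assumption about how the grade for assessment might be derived, not Moodle's actual code; the function name and the linear penalty scale are hypothetical.

```python
def grade_for_assessment(given_score: float, all_scores: list[float],
                         max_score: float = 100.0) -> float:
    """Hypothetical: grade an assessor by distance from the mean peer score.

    The further the given score is from the mean of all peer scores for
    the same submission, the larger the penalty. A deviation equal to
    the full score range yields a grade of 0. (Assumed behaviour only.)
    """
    mean = sum(all_scores) / len(all_scores)
    deviation = abs(given_score - mean)
    return max(0.0, 100.0 * (1.0 - deviation / max_score))

# Example: peers gave 70, 75 and 95, so the mean is 80. The assessor
# who gave 95 deviates by 15 points and is penalized accordingly.
print(grade_for_assessment(95, [70, 75, 95]))  # 85.0
```

Under such a scheme, the student who gave 95 would see a grade for assessment of 85% but, as described above, would have no way of seeing the other assessments that produced the mean.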
There should be a way to see the origin of these penalties in order to maximize the learning experience.