MDL-31065

Quiz statistics do not show shortanswer/numerical responses that did not match

    Details

    • Type: Bug Bug
    • Status: Closed
    • Priority: Minor Minor
    • Resolution: Fixed
    • Affects Version/s: 2.1.4, 2.2.1, 2.3
    • Fix Version/s: 2.1.5, 2.2.2
    • Component/s: Questions
    • Labels:
    • Testing Instructions:

      0. Run all unit tests in /question.

      1. Create the following questions:

      • a shortanswer question with right answer 'frog',
      • and a numerical question with right answer 42.
      • a calculated question with just a right answer.
      • a calculatedsimple question with just a right answer.
      • a numerical question with right answer 42, and multichoice units.

      2. Add the questions to a quiz, and attempt it as two different students, one who gets it all right, and one who enters wrong answers for each question.

      3. As teacher, go to the quiz statistics report, and go into the detailed analysis for each question. Look at the response analysis, and verify that both the right and wrong answers appear.

    • Affected Branches:
      MOODLE_21_STABLE, MOODLE_22_STABLE, MOODLE_23_STABLE
    • Fixed Branches:
      MOODLE_21_STABLE, MOODLE_22_STABLE
    • Pull from Repository:
    • Pull Master Branch:

      Description

      See http://moodle.org/mod/forum/discuss.php?d=174627

      This affects shortanswer, numerical, calculated and calculatedmulti.

      Student responses that do not match any teacher-supplied pattern are not included in the response analysis in the quiz statistics report. They should be listed as "(did not match any model answer)" or something similar.

      (It is good practice to always include a '*' answer to these question types, to give feedback to students who got the question completely wrong, which is why our testing has not caught this before!)
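
      For illustration, the kind of change being asked for might look like the following sketch (assuming a shortanswer-style question class and the question engine's question_classified_response class; this is a sketch of the idea, not the actual patch):

          public function classify_response(array $response) {
              if (!$this->is_complete_response($response)) {
                  return array($this->id => question_classified_response::no_response());
              }

              $ans = $this->get_matching_answer($response);
              if (!$ans) {
                  // No teacher-supplied answer matched. Instead of returning nothing,
                  // classify the response into a catch-all class (the key 0 here is an
                  // assumption) with zero credit, keeping the student's actual text so
                  // it still appears in the response analysis.
                  return array($this->id => new question_classified_response(
                          0, $response['answer'], 0));
              }

              return array($this->id => new question_classified_response(
                      $ans->id, $response['answer'], $ans->fraction));
          }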

            Activity

            Tim Hunt created issue -
            Tim Hunt made changes -
            Fix Version/s STABLE backlog [ 10463 ]
            Labels triaged
            Joseph Rézeau added a comment -

            This is a more general problem as it affects ALL question types (also multiple choice, match and any "open-answer" 3rd-party question types such as PREG and REGEXP).

            Actually, in the quiz report page, the section entitled "Analysis of responses" just means "Analysis of correct responses", since it does not report the incorrect responses (to any question type) entered by the students at all, unlike the full Response history displayed on each student's individual Review of attempt page.

            If we agree that the "Analysis of responses" section of the statistics should include the breakdown of incorrect responses along with the correct ones (and I think this would be quite useful to teachers who want to a) understand their students' errors and b) improve their quizzes), then things might look like the attached example for an MCQ question.
            Obviously, the statistics figures would have to take this into account, at least for the Frequency column.
            One possibility would be to display the stats for correct responses in black (as now) and the stats for incorrect ones in red...

            I volunteer to create a branch on my moodle 2.2 github to study the feasibility of this new feature.

            Joseph

            Joseph Rézeau added a comment -

            #1 shows actual "Review of attempt" for one student, including the "Response History".
            #2 shows actual "Analysis of (correct) responses" in moodle 2.1
            #3 shows proposed "Analysis of (correct and incorrect) responses"

            Joseph Rézeau made changes -
            Attachment screenshot-1-JR.jpg [ 26311 ]
            Tim Hunt added a comment -

            It is not 'analysis of correct responses', but rather analysis of final responses.

            And that analysis is currently buggy and should be fixed.

            For adaptive mode (and interactive mode) I agree that final ~= correct, but not quite.

            I think the right way to implement the new feature you want (which is a nice idea) is not by forking the existing code, but instead to make a new report plugin.

            Tim Hunt made changes -
            Testing Instructions
            Original value:
            0. Run all unit tests in /question.

            1. Create a shortanswer question with right answer 'frog', and a numerical question with right answer 42. Also create a calculated and a calculatedsimple question with just a right answer.

            2. Add the questions to a quiz, and attempt it as two different students, one who gets it all right, and one who enters wrong answers for each question.

            3. As teacher, go to the quiz statistics report, and go into the detailed analysis for each question. Look at the response analysis, and verify that both the right and wrong answers appear.

            New value:
            0. Run all unit tests in /question.

            1. Create the following questions:
            * a shortanswer question with right answer 'frog',
            * and a numerical question with right answer 42.
            * a calculated question with just a right answer.
            * a calculatedsimple question with just a right answer.
            * a numerical question with right answer 42, and multichoice units.

            2. Add the questions to a quiz, and attempt it as two different students, one who gets it all right, and one who enters wrong answers for each question.

            3. As teacher, go to the quiz statistics report, and go into the detailed analysis for each question. Look at the response analysis, and verify that both the right and wrong answers appear.
            Tim Hunt added a comment -

            I think this fix, which comes with unit tests, is right. However, I would be really grateful if anyone could test or review it before I submit it for integration.
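
            For anyone reviewing, roughly what such a unit test needs to assert (a sketch only, not the actual test from the branch; it assumes the generic shortanswer test helper and the SimpleTest-style assertions Moodle used on these branches):

                public function test_classify_response_no_match() {
                    $sa = test_question_maker::make_question('shortanswer');
                    $sa->start_attempt(new question_attempt_step(), 1);

                    // 'cat' matches none of the defined answers, so it should be put
                    // into a catch-all class with zero credit, keeping the student's
                    // text, rather than being dropped from the analysis.
                    $classified = $sa->classify_response(array('answer' => 'cat'));
                    $this->assertEqual(1, count($classified));
                    $class = reset($classified);
                    $this->assertEqual('cat', $class->response);
                    $this->assertEqual(0, $class->fraction);
                }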

            Tim Hunt made changes -
            Status Open [ 1 ] → Waiting for peer review [ 10012 ]
            Pull Master Diff URL https://github.com/timhunt/moodle/compare/master...MDL-31065
            Pull Master Branch MDL-31065
            Pull from Repository git://github.com/timhunt/moodle.git
            Tim Hunt added a comment -

            I just added some people as watchers who might plausibly be able to peer-review this.

            Pierre Pichet added a comment -

            Given the number of lines changed, is next Monday an acceptable delay?

            Tim Hunt added a comment -

            Sure, no worries. This has been a problem since 2.1 was released, and people have only just noticed, so it does not matter if we take a few more weeks to make sure the fix is correct.

            Joseph Rézeau added a comment -

            Thanks for this, Tim. I've just tested your fix, and it does work as expected, but only on a student's first attempt.

            If a quiz is set to allow more than one attempt, student responses from any further attempts are not shown in the statistics.

            I've just realized that this is not a problem with the current fix; we have the same bug in general: quiz statistics do not take into account student attempts other than the first one, regardless of Calculate statistics from being set to all attempts.

            This looks like another bug... Can you confirm my findings or am I misunderstanding something?

            Joseph

            Tim Hunt added a comment -

            Look at the top of the front page of the statistics report. You will see an option about whether to analyse all attempts, or just the first attempt for each student. That option is working.

            The reason for having that option is that the statistical calculations are based on various assumptions, and those assumptions are more likely to be valid for a student's first attempt, than when there are repeated attempts (where hopefully the student has learned from their first attempt, and so the different attempts are not independent.)

            Joseph Rézeau added a comment -

            @Tim,
            Well, I had seen that "all attempts" setting, but I had completely missed the

            Quiz information
            Last calculated 8 mins 34 secs ago; there have been 2 attempts since then.

            and I had forgotten to press the Recalculate now button.

            So things do work as expected, both in general and in your current fix.

            I will now conduct some more tests - with PREG and REGEXP question types and report in this thread as soon as possible.

            Tim Hunt added a comment -

            Those question types may need to be fixed too. If you look at the diff for this issue, you will see the kind of change that may be necessary.
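
            For maintainers of those question types, the other half of the kind of change involved (again a sketch under the same assumptions, not copied from the diff) is to advertise a catch-all response class from the question type's get_possible_responses(), so the report has a label for responses that match nothing:

                public function get_possible_responses($questiondata) {
                    $responses = array();
                    foreach ($questiondata->options->answers as $aid => $answer) {
                        $responses[$aid] = new question_possible_response(
                                $answer->answer, $answer->fraction);
                    }
                    // Catch-all class for responses matching none of the answers.
                    // The key 0 is an assumption; it just has to agree with what
                    // classify_response() returns for non-matching responses.
                    $responses[0] = new question_possible_response(
                            get_string('didnotmatchanyanswer', 'question'), 0);
                    $responses[null] = question_possible_response::no_response();
                    return array($questiondata->id => $responses);
                }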

            Joseph Rézeau added a comment -

            @Tim
            Just tested PREG and REGEXP with your latest fix, and the quiz report works as expected. No need for a special fix for those question types.

            So everything is fine, with the exception of a minor terminology problem.

            The Model responses column can contain "expected student answers" with a grade of 100% or less, which is fine. But "expected student answers" can also mean "incorrect answers", with a grade of 0%. In that case, it seems weird to retain "Model responses" as the heading for that column. Of course, there is no real ambiguity, because the "Partial credit" column clearly shows that the anticipated incorrect answers got a 0% score. At the moment I cannot think of a more suitable heading for that column, so it can remain as is until someone comes up with a better idea.

            As far as I'm concerned, I think this bug fix works as expected and would like to see it in core versions as soon as possible. But let's wait for more tests from different people first.

            Thanks, Tim,

            Joseph

            Tim Hunt added a comment -

            Thanks Joseph. Nice to see that regex inheriting from shortanswer really works.

            Regarding 'Model'. I agree with your comments. Not the perfect word, but hard to think of anything better. Anyway, that is a separate issue.

            Mahmoud Kassaei made changes -
            Assignee Tim Hunt [ timhunt ] → Mahmoud Kassaei [ mkassaei ]
            Tim Hunt made changes -
            Assignee Mahmoud Kassaei [ mkassaei ] → Tim Hunt [ timhunt ]
            Mahmoud Kassaei made changes -
            Original Estimate 0 minutes [ 0 ]
            Remaining Estimate 0 minutes [ 0 ]
            Status Waiting for peer review [ 10012 ] → Peer review in progress [ 10013 ]
            Peer reviewer mkassaei
            Mahmoud Kassaei added a comment -

            The changes in the code look fine. I like the change of using a more generic static method, e.g.:
            test_question_maker::make_question('shortanswer');
            instead of:
            test_question_maker::make_a_shortanswer_question();

            The following string exists in both the numerical and shortanswer qtypes; a more generic language string would have avoided duplication in this case (sorry, I have been too pedantic).

            $string['didnotmatchanyanswer'] = '[Did not match any answer]';

            Tim Hunt added a comment -

            Thanks Mahmoud. You are right. It is much better for that string to be shared between all question types, since we would like consistency here. I thought about it while I was doing the development, and reached the wrong conclusion. I have now moved that string to the main question.php lang file, and amended the commit.
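
            In practice that just means the one definition lives in lang/en/question.php and every question type fetches it from the shared component, roughly (string key as shown above):

                // lang/en/question.php
                $string['didnotmatchanyanswer'] = '[Did not match any answer]';

                // In each question type's get_possible_responses():
                get_string('didnotmatchanyanswer', 'question')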

            Tim Hunt made changes -
            Status Peer review in progress [ 10013 ] → Development in progress [ 3 ]
            Tim Hunt made changes -
            Status Development in progress [ 3 ] → Waiting for integration review [ 10010 ]
            Fix Version/s 2.1.5 [ 11553 ]
            Fix Version/s 2.2.2 [ 11552 ]
            Fix Version/s 2.3 [ 10657 ]
            Fix Version/s STABLE backlog [ 10463 ]
            Pull 2.1 Branch MDL-31065_21
            Pull 2.2 Diff URL https://github.com/mkassaei/moodle/compare/MOODLE_22_STABLE...MDL-31065_22
            Pull 2.1 Diff URL https://github.com/timhunt/moodle/compare/MOODLE_21_STABLE...MDL-31065_21
            Pull 2.2 Branch MDL-31065_22
            Eloy Lafuente (stronk7) made changes -
            Currently in integration Yes [ 10041 ]
            Pierre Pichet added a comment -

            Just to say that I am beginning the tests, and the question unit tests are OK.
            More later.

            Pierre Pichet added a comment -

            later...
            With 1 attempt from the good boy (all good responses) and 1 attempt from the bad boy (all bad responses), I got the expected results:
            Average grade of first attempts 50.00%
            Average grade of all attempts 50.00%
            Median grade (for first attempts) 50.00%
            Standard deviation (for first attempts) 70.71%

            This confirms Joseph's results, as expected.
            I just did the test on the master version (MDL-31065).
            Do you need tests on MDL-31065_21 and MDL-31065_22?
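
            (Checking those figures: with exactly two first attempts scoring 0% and 100%, the mean is 50% and the sample standard deviation is sqrt(((0 - 50)^2 + (100 - 50)^2) / (2 - 1)) = sqrt(5000) ≈ 70.71%, which matches the report.)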

            Eloy Lafuente (stronk7) made changes -
            Status Waiting for integration review [ 10010 ] → Integration review in progress [ 10004 ]
            Integrator stronk7
            Eloy Lafuente (stronk7) added a comment -

            (lost) but integrated, thanks!

            Eloy Lafuente (stronk7) made changes -
            Status Integration review in progress [ 10004 ] → Waiting for testing [ 10005 ]
            Affects Version/s 2.1.4 [ 11452 ]
            Affects Version/s 2.3 [ 10657 ]
            Affects Version/s 2.2 [ 10656 ]
            Fix Version/s 2.3 [ 10657 ]
            Rossiani Wijaya made changes -
            Tester rwijaya
            Rossiani Wijaya made changes -
            Status Waiting for testing [ 10005 ] → Testing in progress [ 10011 ]
            Rossiani Wijaya added a comment -

            This is working great.

            Thanks

            Test passed.

            Rossiani Wijaya made changes -
            Status Testing in progress [ 10011 ] → Tested [ 10006 ]
            Eloy Lafuente (stronk7) added a comment -

            Your nice code represents only 1/46 of the issues that have been sent upstream this week, so thanks, but not many.

            Nah, joking, many thanks! Closing this as fixed, ciao.

            Eloy Lafuente (stronk7) made changes -
            Status Tested [ 10006 ] → Closed [ 6 ]
            Resolution Fixed [ 1 ]
            Currently in integration Yes [ 10041 ]
            Integration date 27/Jan/12
            Tim Hunt made changes -
            Link This issue is duplicated by MDL-34323 [ MDL-34323 ]

              People

              • Votes: 1
              • Watchers: 5
