Moodle / MDL-28219

Adaptive mode always gives a penalty for checking an answer

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1, 2.2
    • Fix Version/s: 2.1.3
    • Component/s: Questions, Quiz
    • Labels:
    • Testing Instructions:
      • Create a quiz with "How questions behave: Adaptive mode" (otherwise, with standard settings)
      • Add two questions (both on the same page):
        • Q1: "Short Answer" question, question text: "What is the great answer?", answer text: "42" (100%)
        • Q2: "Numerical" question, question text: "What is 2 times 2", answer text: "4" (100%)
      • Log in as a student and attempt the quiz.
      • Enter the following answers in this sequence, clicking "Check" on the corresponding question after each entry:
        • What is the great answer? a
          VERIFY penalty message present: "This answer attracted a penalty of 0.33".
        • What is the great answer? 42
          VERIFY penalty message absent.
        • What is the great answer? b
          VERIFY penalty message absent.
        • What is 2 times 2? 3
          VERIFY penalty message present.
        • What is 2 times 2? 4
          VERIFY penalty message absent.
        • What is 2 times 2? 5
          VERIFY penalty message absent.

      Now proceed to next screen. Click "Submit all and Finish". Review the quiz as a teacher.

      VERIFY: The history of each of the two questions shows exactly 5 steps, with grades shown as:

      • empty
      • 0.00
      • 0.67
      • 0.67
      • 0.67
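The grade sequence above follows the adaptive penalty arithmetic: each previously graded try subtracts the question's penalty (here 0.33) from the raw fraction, floored at zero. A minimal sketch of that rule (hypothetical Python; `adjusted_fraction` is illustrative, not the engine's PHP code):

```python
def adjusted_fraction(fraction, prev_tries, penalty=0.33):
    # Each previously graded try subtracts `penalty` from the raw
    # fraction; the resulting mark is floored at zero.
    return max(0.0, fraction - prev_tries * penalty)

# Wrong answer on the first try -> graded 0.00
print(round(adjusted_fraction(0.0, 0), 2))   # 0.0
# Correct answer on the second try -> 1 - 1 * 0.33 = 0.67
print(round(adjusted_fraction(1.0, 1), 2))   # 0.67
```

With a third (wrong or repeated) answer after the correct one, the best grade already achieved, 0.67, should be retained, which is what the history above verifies.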
    • Affected Branches:
      MOODLE_21_STABLE, MOODLE_22_STABLE
    • Fixed Branches:
      MOODLE_21_STABLE
    • Pull from Repository:
    • Pull Master Branch:
    • Rank:
      17835

      Description

      When taking a quiz in adaptive mode with a calculated or numerical question, the check-answer feature always attracts a penalty, even if the checked answer is correct and an incorrect answer was never entered.

      1. adaptive1.PNG
        18 kB
      2. adaptive11.PNG
        19 kB
      3. adaptive2.PNG
        4 kB
      4. adaptive3.PNG
        4 kB
      5. adaptive4.PNG
        4 kB
      6. adaptive5.PNG
        28 kB
      7. adaptive6.PNG
        37 kB
      8. adaptive7.PNG
        9 kB
      9. adaptive8.PNG
        43 kB
      10. adaptive9.PNG
        7 kB

        Issue Links

          Activity

          Tim Hunt added a comment -

          More information in http://moodle.org/mod/forum/discuss.php?d=180620
          Steve Turley added a comment -

          Tim,

          This is not just a display issue. I'm seeing the wrong scores also showing up in the grade book. It makes the adaptive mode totally unusable for me on numerical or calculated questions. All of my questions using adaptive mode are now broken. I will attach some screen shots. The reported scores are after the test student's first try with a correct answer. The penalty on the multiple choice question was 33% and the penalty on the numeric and calculated questions was set to 10%. Note the funny state it reports on the questions before final submission. It says the multiple choice question is correct but the other two are not complete. Perhaps that's a clue about a related error.

          The final screen shot is the response history for question 2. Note that it says the marks are 1.0 until the attempt is finished. It also records attempts as being saved, even when I wasn't checking this question, but rather another one.

          Scott N. Walck added a comment -

          I also have wrong scores showing up in my grade book. Every problem gets an extra 0.05/1.00 subtracted from it, regardless of whether it was correct, or incorrect a number of times.

          Henning Bostelmann added a comment - - edited

          The problem is not restricted to the Calculated question type. Similar issues occur with other types; only the details of the incorrect behaviour differ slightly. For example, with the shortanswer type, the "penalty applied" message occurs as well; further, the student can re-submit the correct answer several times and is penalized for these submissions.

          I have provided an attempt at a solution. However, the question engine code is complicated, and I'm not sure whether it will cause regressions. If anyone can verify the fix on their system, that would be most helpful.

          I have also increased the priority to "major", since the adaptive behaviour is completely broken and produces incorrect grades.

          Henning Bostelmann added a comment -

          Tim: The fix is for master only, but should be an easy cherry-pick to 2.1.

          Tim Hunt added a comment -

          This fix is not correct.

          Remember that the aim of adaptive mode in 2.1 is to work the same way as adaptive mode in 2.0 and before. In particular, in the past, after submitting a correct response, the student was still allowed to change their answer (in order to explore the feedback for the other responses).

          So, the third bullet under "Verify the following:" is wrong.

          Actually, I think that the first thing we need to do here is to get an install of 2.0 or 1.9, and step through some adaptive questions there making a record of what the behaviour we are trying to replicate is.

          Then, the next step should be to encode that expected behaviour in some more unit tests in question/behaviour/adaptive/simpletest/testwalkthrough.php.

          Only then should we start changing the actual behaviour code.


          One final note about the patch. Specifically the changes to question/behaviour/adaptive/behaviour.php. This is wrong.

          The student can carry on interacting with the question even after they have got it right, so we must not move to one of the finished states like gradedright until after Submit all and finish happens. We must stay in one of the three in-progress states 'todo', 'complete' or 'invalid'.

          The important distinction between todo and complete is captured by the following meanings:
          todo - the student still needs to do something to this question in this attempt.
          complete - the student no longer needs to worry about this question.

          I hope that you can now understand why the current logic is correct. Until the student has submitted a correct answer, they need to re-visit the question (time and inclination permitting) before submitting the quiz. Once they have got it right, they can stop worrying about it, but the question remains open for further changes.
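The state distinction Tim describes can be sketched as a small model (hypothetical Python for illustration only, not Moodle's PHP question engine; the state names follow the comment above):

```python
from enum import Enum

class QState(Enum):
    # In-progress states: the question stays in one of these until
    # "Submit all and finish".
    TODO = "todo"          # student still needs to do something here
    COMPLETE = "complete"  # student no longer needs to worry about it
    INVALID = "invalid"    # current response cannot be graded
    # Finished state: only reached when the whole attempt is submitted.
    GRADED_RIGHT = "gradedright"

IN_PROGRESS = {QState.TODO, QState.COMPLETE, QState.INVALID}

def state_after_check(correct: bool) -> QState:
    # Checking an answer must never leave the in-progress states:
    # a correct check yields 'complete', not 'gradedright'.
    return QState.COMPLETE if correct else QState.TODO

assert state_after_check(True) in IN_PROGRESS
```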

          Henning Bostelmann added a comment -

          Tim, thanks for explaining the meaning of the states. They are, however, at least labelled incorrectly: 'todo' means (by your description) something like "In progress" but describes itself as "Not yet answered" in the language strings; 'complete' apparently means "Answer is complete" but describes itself as "Answer saved". These may be mere language string issues; I'm ignoring them for the moment.

          However, there are (at least) three functional issues remaining:

          • When in 'complete' state, and the user enters a wrong answer, the question goes back to 'todo' rather than remaining 'complete' (there's nothing "to do" as the user can no longer improve their score);
          • When in 'complete' state, a penalty is displayed, although the user has reached the right answer and can no longer improve;
          • Most importantly: When pressing "Submit all and finish", only the last answer is taken into account, not the best one. For example, if the user enters a correct answer, then a wrong one, the question is graded 0.

          I'll prepare a new patch soon.

          By the way, I still have a 1.9 test system with lots of data available, so comparing to the previous behaviour is not a problem; but I think it's clear even without this that the behaviour described above is not correct.

          Tim Hunt added a comment -
          • Actually, the text displayed to users to describe the current state is controlled by the behaviour via the get_state_string method. Adaptive behaviour overrides that method. That deals with most of the states a question can be in, but you are right that what it displays for 'wrong after right' could be improved.
          • Displaying the penalty when the current answer is right is pretty much what this bug is about. Note, however, that Moodle still internally adds a penalty in this case (it is just never relevant because, as you say, the student can no longer improve their score). Watch the raw_grade, grade, penalty, sumpenalty columns from http://docs.moodle.org/dev/Question_database_structure#Following_what_happens_when_questions_are_attempted to learn how 1.9 really worked.
          • What do you mean by 'only the last answer is taken into account'? If you mean that the latest response the student gave is what is displayed on the review page, then again that is how 1.9 worked. However, the process_finish code does seem to correctly set the fraction - or at least it tries to.
          Henning Bostelmann added a comment -

          @1: Not when viewing the "response history" later (as a teacher) - you'll always see "Not yet answered" or "Answer saved" here (possibly in other places too).

          @2: I'm aware how it works internally, and the internal behaviour is fine, just the display is wrong. (At this point, it's really a display problem.)

          @3: It tries, but fails. Try setting up a short-answer question, and entering a correct then an incorrect answer. Your score will be displayed as 1 until you "Submit all and finish"; but from there on, and in the gradebook, it will be 0. That's likely the effect described by Scott and Steve above ("penalty is always applied in the gradebook").

          For details: In qbehaviour_adaptive::process_finish(), the call to $pendingstep->get_fraction() will always return null. $this->qa->get_fraction() should be used instead. [Note that similar code works in process_submit() since process_save() is called first.] As a related issue, invalid attempts need to be handled differently in process_finish() - at the moment, if your last answer is invalid, you will receive 0 for the overall question.
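The contrast Henning draws can be mimicked in a hedged sketch (hypothetical Python, not the real question engine; the two function names are illustrative analogues of the two PHP calls):

```python
# 'steps' holds the fraction recorded by each earlier step of the
# attempt; the pending finish step itself carries no fraction yet.

def buggy_finish(pending_fraction, steps):
    # Analogue of $pendingstep->get_fraction(): always None at finish,
    # so the question is graded 0 regardless of earlier correct checks.
    return pending_fraction if pending_fraction is not None else 0.0

def fixed_finish(pending_fraction, steps):
    # Analogue of $this->qa->get_fraction(): the latest fraction
    # actually recorded anywhere in the attempt.
    recorded = [f for f in steps if f is not None]
    return recorded[-1] if recorded else 0.0

steps = [None, 0.0, 0.67]            # save, wrong check, correct check
print(buggy_finish(None, steps))     # 0.0  -> wrong grade in gradebook
print(fixed_finish(None, steps))     # 0.67 -> intended grade
```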

          Tim Hunt added a comment -

          1. Response history is MDL-28679.

          2. We agree.

          3. OK, so bug. Sounds like you understand the issue better than me, so looking forwards to seeing the patch.

          Thanks. One of these days I will have to buy you a beer.

          Tim Hunt added a comment -

          1. And I have just done a fix for MDL-28679.

          Henning Bostelmann added a comment -

          Hi Tim

          Here's the new fix. Can you have another look?

          It's really a bundle of several related issues that have been fixed; only one of them was specific to the numerical qtype. I have tried to separate them out into independent commits. Unit tests are amended as well.

          There's one smaller issue remaining: When submitting a correct answer, then an incorrect one, the question now no longer shows a penalty (as discussed) but shows "With previous penalties this gives..." It should probably read "With previous answers this gives..." However, fixing this would likely need a new language string, and I'm not sure whether you would want to backport this to 2.1, so I'm leaving it out for the moment.

          I will be away from Monday on for two weeks and a bit, and might be responding somewhat slower to queries.

          Tim Hunt added a comment -

          Sorry I did not have time to look at this over the weekend. I hope you have a good holiday.

          Thank you very much for implementing the changes in such a clear sequence of commits. That made it much easier to review.

          I am afraid this is still not right. Some code-review comments:

          1. is_improvable() is definitely not an unambiguous property of state. For example, before seeing the whole patch, I was pretty sure that gradedwrong and gradedpartial would be improvable. Instead your definition of improvable is specific to the adaptive behaviour, so the is_improvable method needs to live there, taking the $state as an argument. Other than that, https://github.com/bostelm/moodle/commit/43f9a1ee4b3b3928615d31cce052a06e0e49af68 looks good.

          2. https://github.com/bostelm/moodle/commit/f0f300017a1a3919044c357bba34165aae9cf751 looks good, except that I would leave the original line of code $prevtries = $this->qa->get_last_behaviour_var('_try', 0); rather than the 4 lines you have. I know that is a small performance loss, but it is negligible, so I think code readability wins.

          3. Would be nice to have unit tests for get_last_step_with_behaviour_var, but then it seems that I never wrote any for get_last_step_with_qt_var so I can't talk.

          4. One really pedantic point about whitespace, since I noticed it. In situations like process_finish, I would always put the blank line before the else, rather than after it, because the else belongs with the code that follows, not the code that comes before. It is like you should always have more whitespace before a header than after it. However, this may just be my personal preference.

          5. Other than that, https://github.com/bostelm/moodle/commit/6dc3c3f62927fe24f8b8cfe85d1875eedfac1582 looks fine. The patch is hard to understand because of the necessary changed indent, but I got there eventually.

          6. https://github.com/bostelm/moodle/commit/98cc6bc2a12fb483c58b75ec13d9e0eec29df99d is good.

          I still need to review the new unit tests.

          I worry a bit that the testing instructions above use exactly the same sequence of interactions for both questions. I would like to see tests for

          • right first time,
          • submit the same wrong response twice successively, what should that do to the penalty?
          • To submit all and finish with, and without, having done check on the individual question, and with final answer wrong/right.

          Perhaps some of this is covered in the unit tests? Or perhaps I should write the extra tests I want to see?

          Tim Hunt added a comment -

          OK, so as above, there is a little more work to do.

          Comments on the unit tests:

          7. It would improve the readability to make helper functions like $this->get_does_not_contain_penalty_info_expectation(); $this->get_contains_penalty_info_expectation(...), but I am not expecting you to do this.

          8. In the second and third hunks of https://github.com/bostelm/moodle/commit/3e51f8dd5df6822abef6dd83d8356925f2c11d2d, why do you use 0.5 instead of 1? The code works, I just don't understand why you did it like that.

          9. I like your choice of 'wrong' animals.

          10. With test_adaptive_shortanswer_invalid_after_complete, would it be more interesting to submit all and finish with the invalid answer still there?

          11. Would be nice if more of your new tests used the get_does_not_contain_penalty_info_expectation / get_contains_penalty_info_expectation methods (or equivalent) to verify the correct behaviour.

          Anyway, overall excellent work. Just a few details to fix and then this can be integrated. Remember that it would be better to fix MDL-29408 in this same branch.

          Tim Hunt added a comment -

          Note to people watching this bug: although there are a few minor changes I am asking Henning to make, I think that overall the logic of the code he has given is correct.

          So, it would be very helpful if you are able to apply his changes to your Moodle and test to verify they work for you while Henning is on holiday for the next two weeks. That should ensure that as soon as he gets back, we can submit this for integration with confidence. Thanks.

          Joseph Rézeau added a comment -

          Just tested Henning's patch from https://github.com/bostelm/moodle/commits/MDL-28219
          Everything looks quite OK.
          Good job!
          Joseph

          Michael J Litzkow added a comment -

          It would really be great if we could get this fix released soon. I have a professor who is anxiously waiting for it.
          – mike

          Matt Petro added a comment -

          The patches are working for us too. Thanks!

          Henning Bostelmann added a comment -

          Thanks for reviewing this again, Tim. I hope to get to this in a couple of days - I'm currently being held up by some other tasks.

          Tim Hunt added a comment -

          No worries.

          Nice to know that you are back. I hope you had a good holiday.

          Henning Bostelmann added a comment -

          Hi Tim, here's the corrected patch (note the new branch name).

          @1 Method is_improvable moved to qbehaviour_adaptive_renderer::is_state_improvable

          @2 Changed as suggested. [The _really_ elegant solution would be to add a default value parameter to question_attempt_step::get_behaviour_var(), but that's outside the scope of this patch.]
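          For readers curious what that "default value parameter" idea might look like: the following is purely a hypothetical sketch using a stand-in class, not Moodle's actual question_attempt_step implementation.

          ```php
          <?php
          // Hypothetical sketch of a default-value parameter for a
          // get_behaviour_var()-style accessor. This class is a stand-in
          // model, not Moodle's real question_attempt_step.
          class behaviour_var_store {
              private $vars;

              public function __construct(array $vars) {
                  $this->vars = $vars;
              }

              // Return the behaviour variable $name, or $default when it
              // was never set on this step.
              public function get_behaviour_var($name, $default = null) {
                  if (!array_key_exists($name, $this->vars)) {
                      return $default;
                  }
                  return $this->vars[$name];
              }
          }

          $step = new behaviour_var_store(['_try' => 2]);
          var_dump($step->get_behaviour_var('_try', 0));         // int(2)
          var_dump($step->get_behaviour_var('_rawfraction', 0)); // int(0), the default
          ```

          The advantage is that callers no longer need a separate has_behaviour_var() check before every read.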

          @3 Well, I'll keep it in line with the current code, then.

          @4 Changed as suggested. [By the way, whitespace conventions seem to be a frequent matter of confusion here, in particular in integration review. Are they agreed and written down somewhere?]

          @5, @6 OK

          @7 Good idea - I've added the methods, and added calls to them throughout.

          @8 I thought it would be a better test for the regrading functionality to use different numerical values from before. But actually it's inconsistent - at least one of the answers would be expected to have a value of 1, I think. I changed it to 1.

          @9

          @10 Here I'm testing whether, after the invalid answer, the behaviour returns to state $complete (rather than $todo), with no penalty displayed - this is a special case in the code.

          @11 Added above.

          MDL-29408 is now fixed in this branch.

          Good point about submitting the same wrong answer twice in a row - in this case, as in Moodle 1.9, the second incorrect answer should be ignored and the penalty counted only once. This needed another fix in the code, and an amendment to the unit tests; this is included in two extra commits.
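          The rule described here (an identical resubmission is ignored rather than penalised again) can be modelled in miniature as follows; the function and constants are illustrative stand-ins, not the actual qbehaviour_adaptive code.

          ```php
          <?php
          // Miniature model (not Moodle code) of discarding a duplicate
          // submission, so the same wrong answer is only penalised once,
          // as in Moodle 1.9.
          const DISCARD = 'discard';
          const KEEP = 'keep';

          function process_submit(array $response, ?array $prevresponse): string {
              // If the new response is identical to the previous one, ignore it.
              if ($prevresponse !== null && $response == $prevresponse) {
                  return DISCARD;
              }
              return KEEP;
          }

          echo process_submit(['answer' => 'b'], ['answer' => 'b']), "\n";  // discard
          echo process_submit(['answer' => '42'], ['answer' => 'b']), "\n"; // keep
          ```

          In the real behaviour the discarded step never enters the attempt history, which is why the testing instructions expect exactly 5 steps rather than 6.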

          About the testing instructions: These are only meant to check that the reported bug is now gone, in the numerical and shortanswer qtypes. They're not a full functional test of the adaptive question behaviour (which would be rather lengthy). The unit tests cover a good deal of different situations, among these:

          • two right answers in test_adaptive_multichoice2(),
          • wrong-right-wrong sequence in test_adaptive_shortanswer_wrong_right_wrong(), now amended to include duplicate submission of the same wrong answer,
          • empty/invalid answer in the sequence, in test_adaptive_shortanswer_invalid_after_complete(),
          • saving the answer without submitting it, in test_adaptive_multichoice()

          Of course, the unit tests could always be extended.

          Tim Hunt added a comment -

          Sounds good from the comment. I have an exam tomorrow morning, but will try to get to this after that.

          Tim Hunt added a comment -

          Sorry, still one niggle.

          1. is_state_improvable is surely logic, not display, code. Can we easily move that to the behaviour class?

          Actually, so should the existing get_graded_step.

          12. Should get_graded_step use get_last_step_with_behaviour_var?
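          For readers unfamiliar with the API under discussion: the lookup walks the attempt's steps from newest to oldest and returns the first one carrying the named behaviour variable. A toy model of that idea (stand-in classes, not Moodle's question_attempt):

          ```php
          <?php
          // Toy model of a get_last_step_with_behaviour_var()-style lookup:
          // scan steps from the most recent backwards and return the first
          // one that defines the variable. Illustrative only.
          class toy_step {
              public $vars;
              public function __construct(array $vars) { $this->vars = $vars; }
              public function has_behaviour_var($name) {
                  return array_key_exists($name, $this->vars);
              }
          }

          function last_step_with_behaviour_var(array $steps, $name) {
              for ($i = count($steps) - 1; $i >= 0; $i--) {
                  if ($steps[$i]->has_behaviour_var($name)) {
                      return $steps[$i];
                  }
              }
              return null; // No step ever recorded this variable.
          }

          $steps = [
              new toy_step([]),
              new toy_step(['_try' => 1]),
              new toy_step([]), // e.g. an invalid submission recorded no grade.
          ];
          $graded = last_step_with_behaviour_var($steps, '_try');
          echo $graded->vars['_try'], "\n"; // 1
          ```

          This is why reusing such a helper for get_graded_step is attractive: it naturally skips trailing ungraded steps such as an invalid response.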

          OK, three new commits here. What do you think? I think this is nearly ready for integration.

          Tim Hunt added a comment -

          What I mean is, three new commits at https://github.com/timhunt/moodle/commits/MDL-28219c

          Joseph Rézeau added a comment -

          Hi Tim, hope your exam went well this morning.
          Joseph

          Henning Bostelmann added a comment -

          Hi Tim, I agree with your changes. In fact, $qa->get_behaviour() is the function call I had been looking for, without success...

          I've pulled your changes to my branches (master and 2.1). Seems we're ready for integration then?

          Tim Hunt added a comment -

          Just one final thing. I think we should squash all these changes down to a single commit to fix the bug. Do you want to do that (using git rebase -i)? Actually, you could wait until this week's weekly build is out, and then rebase on top of the latest branches.

          But I agree, I want to get this submitted for integration this week, so it goes in to next week's builds. Thanks for sticking with this.

          Henning Bostelmann added a comment -

          All done now - squashed and rebased. Not sure though if the integrator prefers the huge single commit; just in case, I have kept the old branches with individual commits - see https://github.com/bostelm/moodle/compare/master...MDL-28219c if needed.

          Tim Hunt added a comment -

          Thanks Henning.

          Joseph Rézeau added a comment -

          Just switched my local 2.1 Moodle test site to https://github.com/bostelm/moodle/tree/MDL-28219d-21 to test the latest state of this bug.

          Everything works as expected, except for a scenario which had not been originally envisaged and is described below.

          • Create a "Short Answer" question, question text: "What is the great answer?",
            • answer 1 text: "42" (100%)
            • answer 2 text: "forty-two" (100%)
            • answer 3 text: "fortytwo" (50%)
          • Preview the question in the Preview question window, using Adaptive mode.
            • Enter answer: "42" and Check.
            • Check that you get "Correct / Marks for this submission: 10.0/10.0."
            • Enter answer "forty-two" and Check.
            • Check that you get "Correct / Marks for this submission: 10.0/10.0."
            • Enter answer "fortytwo"
            • You get: "Partially correct / Marks for this submission: 5.0/10.0. With previous penalties this gives 10.0/10.0."
            • Enter answer "dunno"
            • You get: "Incorrect / Marks for this submission: 0.0/10.0. With previous penalties this gives 10.0/10.0."

          Why would we do this? Well, in Adaptive mode, I always encouraged my students not to be content with ONE correct answer, but to try and find as many correct (or partially correct) answers as possible. Of course, once the correct answer (or one of the correct answers, if there is more than one) has been given, the score should remain untouched, with or without penalties.

          But further submissions (which I would prefer to call "answers") from the student should only display the message:

          "Correct" or "Partially correct" or "Incorrect" (according to the case)
          and

          "Marks for this answer: 10/10 (or 5/10 etc., according to the case)."
          But the message "With previous penalties this gives 10.0/10.0." should NOT be displayed, as it is useless and potentially disturbing.

          Tim Hunt added a comment -

          Actually, on the vocabulary point: I try to consistently use:

          1. Response for what the student entered.

          2. Answer for what the teacher entered (e.g. in Multichoice or short-answer), although in the UI the teacher answers are actually labelled choice 1, choice 2, etc. I think that is helpful, rather than confusing.

          3. For multi-stage behaviours, we use the word try, as in "3 tries remaining" in interactive mode.

          4. And correspondingly quizzes have multiple attempts.

          See also http://docs.moodle.org/dev/Overview_of_the_Moodle_question_engine#A_note_about_scores.

          So, I think it should say "marks for this response" or possibly "marks for this try". (Marks is correct. I don't know how to choose between response and try; it is a matter of where you want to put the emphasis.)

          I would be inclined to change "With previous penalties ..." to "Accounting for previous tries ..."

          Note that this bug already contains reviewed code that is ready to integrate, so I am wondering whether we should move further changes into a separate issue?

          Joseph Rézeau added a comment -

          @Tim, thanks for your comment.
          The vocabulary (aka terminology) is a difficult problem, because the words used tend to change their meaning according to the context. But I agree of course with consistency, and I agree with sticking to "response" for what the student enters. Note: we do have a problem in French, where there is only one word, "réponse", corresponding to both "answer" and "response", but that is another matter...

          I would prefer "Marks for this response", then.

          I would be inclined to change "With previous penalties ..." to "Accounting for previous tries ..."

          Yes, why not? But does this mean that you agree or disagree that the "penalties, etc" message should not be displayed upon further tries at a question which has already been answered as Correct?

          And yes, could you please move this to a new issue.

          Joseph

          Joseph Rézeau added a comment - edited

          HANG ON PLEASE!

          Just found out that the fix proposed by Henning is now preventing my adaptivewithhelp behaviour from working properly...

          Adaptivewithhelp is based on adaptive behaviour and is needed for my regexp question type (and, maybe, by Oleg's preg question type too).

          It is working fine with the current state of adaptive behaviour, but not with Henning's fix.

          What should we do?

          EDIT.- The problem I am encountering is probably due to the introduction of is_state_improvable, but I'll have to investigate...
          By "not working properly" I mean that the "Help me" submit button does add a letter to the student's response, but once the Check button has been pressed, it refuses to add a letter unless the student adds something new and submits it... This is not what I intend Adaptivewithhelp to do.

          Joseph Rézeau added a comment -

          EDIT2.- Got it! It's this

                  if ($this->question->is_same_response($response, $prevresponse)) {
                      return question_attempt::DISCARD;
                  }

          which ruins my Adaptivewithhelp behaviour

          I guess if Henning's fix is integrated I'll have to either find a workaround in my Adaptivewithhelp behaviour OR simply no longer base it on Adaptive.

          Joseph Rézeau added a comment -

          EDIT3 - I think I've found the solution:

          • copy public function process_submit(question_attempt_pending_step $pendingstep) { ... }

            from adaptive/behaviour.php to adaptivewithhelp/behaviour.php

          • change
                    if ($this->question->is_same_response($response, $prevresponse)) {
                        return question_attempt::DISCARD;
                    }
            

            to

                    if ($this->question->is_same_response($response, $prevresponse) && !$pendingstep->has_behaviour_var('helpme') ) {
                        return question_attempt::DISCARD;
                    }
            
            

          So far it seems to work, so I give my go-ahead to Henning's fix as far as I'm concerned!

          Joseph

          Eloy Lafuente (stronk7) added a comment -

          Reopening as per Joseph's last feedback, thanks!

          Henning Bostelmann added a comment -

          Eloy, I'm a bit confused. The last feedback from Joseph suggested a modification to his own, contributed code (outside core). In which way should we improve the present patch to accommodate this?

          Tim Hunt added a comment -

          Henning is right, and Joseph is just confusing the issue with comments about his third-party code.

          This patch is a big win. Much better than what we currently have, even if it is not perfect. Please review and, hopefully, integrate this. If anything else needs doing, we can tackle that in separate issues later.

          Eloy Lafuente (stronk7) added a comment -

          Apologies for the confusion. Reviewing...

          Eloy Lafuente (stronk7) added a comment -

          Nothing to argue against, so... integrating, thanks! (And sorry for misreading Joseph's comments.)

          Eloy Lafuente (stronk7) added a comment -

          Just 2 tiny and unrelated comments...

          1) Lately I'm seeing a lot of one-line comments ending with a . (dot). I don't think we should use them at all.

          2) It would be great to review a bit the code-coverage definitions for the /question tests. Right now they only "cover" a few files, which is really far from reality. Each subplugin (qtype, behaviour...) should surely define its own dir to be included; as a last resort, if that is difficult due to cross-interdependencies, it is surely better to include the whole /question dir.

          Ciao

          Sam Hemelryk added a comment -

          Tested + passed = thanks

          Eloy Lafuente (stronk7) added a comment -

          Many thanks for all the hard work. This is now part of Moodle, your favorite LMS.

          Closing as fixed, ciao

