Moodle: MDL-15233

Calculated Questions: Tolerance is applied inconsistently to datasets and to student responses

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.8
    • Fix Version/s: None
    • Component/s: Questions, Quiz
    • Labels:
      None
    • Environment:
      Not sure (not an administrator)
    • Affected Branches:
      MOODLE_18_STABLE

      Description

      Summary:
      For a dataset answer, the relative tolerance is applied to the correct answer after it has been rounded (as specified by "Correct answer shows"). For grading, however, the relative tolerance is applied to the correct answer before rounding. This problem is most likely to be noticed when the tolerance is low.

      Description:
      As an example, suppose that a dataset item is 3.23/10.1 = 0.31980198, that "correct answer shows" is set to 3 significant figures, and that the relative tolerance is 0.00001. In the "edit the datasets" page, the answer is displayed as 0.320, with (min, max) = (0.3199968, 0.3200032). According to the datasets page, the tolerance is applied to the value 0.320, so entering "0.320" would be correct, but 0.3198 would be incorrect.
      What happens when you preview the question? The tolerance is applied to the value 0.31980198, so (min, max) = (0.319798782, 0.319805178). As a result, entering 0.320 is incorrect.
      This is a bug because the user (teacher) cannot rely on the information in the datasets page to reflect how the question is graded. Personally, I prefer the way that the datasets page applies the tolerance, because in analytical chemistry, reporting extra decimals is incorrect. But I can see that other users would prefer the latter way of applying tolerances. Either way, the tolerances should be applied consistently.
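The arithmetic above can be sketched as follows. This is an illustrative Python reconstruction of the two inconsistent computations, not Moodle's actual code (function names are made up for the sketch):

```python
def round_sig(x, figs):
    """Round x to the given number of significant figures."""
    return float(f"%.{figs}g" % x)

def rel_interval(value, tol):
    """(min, max) permitted by a relative tolerance around value."""
    return (value * (1 - tol), value * (1 + tol))

true_answer = 3.23 / 10.1           # 0.31980198...
shown = round_sig(true_answer, 3)   # 0.320, per "Correct answer shows"
tol = 0.00001

# Datasets page: tolerance applied AFTER rounding.
ds_min, ds_max = rel_interval(shown, tol)        # (0.3199968, 0.3200032)
# Grader: tolerance applied BEFORE rounding.
gr_min, gr_max = rel_interval(true_answer, tol)  # (0.319798782..., 0.319805178...)

print(ds_min <= 0.320 <= ds_max)   # True: the datasets page accepts 0.320
print(gr_min <= 0.320 <= gr_max)   # False: the grader rejects 0.320
print(gr_min <= 0.3198 <= gr_max)  # True: the grader accepts the extra-digit 0.3198
```

The last line shows the point made below about over-precise answers: the grader's interval is so narrow around the unrounded value that only responses carrying extra digits can fall inside it.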

      Reproduce:
      Define a new calculated question with the variable {a}. Set:
      "Correct Answer formula =" : {a}/10.1
      "Tolerance ±" : 0.00001
      "Tolerance Type" : relative
      "Correct answer shows" : 3
      "Format" : significant figures

      On the next page, select "will use the same existing private dataset as before"

      On the next page, add a new item:
      "Param

      {a}" : 3.23
      "Range of Values" : 1.01 - 8.01
      "Decimal Places" : 2
      "Distribution" : uniform

      Add this item, and you will get:
      Param {a}: {a}/10.1 = 3.23/10.1 = 0.320
      Min: 0.3199968, Max: 0.3200032

      Actual Results:
      Save changes. Now preview the question. Do the following tests:
      1. Enter the answer "0.320" and click submit (I did this in adaptive mode). It will be marked incorrect. Click start over.
      2. Click Fill with correct. The answer box is filled with "0.320." Click submit. Again, it is marked incorrect. Click start over.
      3. Enter the answer "0.319802" and click submit. It will be marked correct.

      In doing this, did you notice another bug? When the response is marked incorrect in tests 1 and 2, it gets a green check mark! An inattentive student thinks the answer was correct, yet gets no points! When the response is marked correct in test 3, it gets a red x! Now do a final test:

      4. Enter the answer "1000" and click submit. It will be marked incorrect. In addition, it gets a red x, as expected.

      Evidently, the visual feedback (check and x) is decided by the datasets (min, max), not by the grading (min, max).
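The four preview tests are consistent with the icon being driven by the datasets interval (around the rounded 0.320) while the grade is driven by the grader's interval (around the raw value). A minimal sketch, assuming exactly that split (the variable names are illustrative, not Moodle's):

```python
true_answer = 3.23 / 10.1  # the dataset item from the example above
tol = 0.00001

# Icon (check / x): interval around the ROUNDED value, as on the datasets page.
icon_min, icon_max = 0.320 * (1 - tol), 0.320 * (1 + tol)
# Grade: interval around the RAW value, as the grader computes it.
grade_min, grade_max = true_answer * (1 - tol), true_answer * (1 + tol)

for response in (0.320, 0.319802, 1000.0):
    icon = "check" if icon_min <= response <= icon_max else "x"
    grade = "correct" if grade_min <= response <= grade_max else "incorrect"
    print(f"{response}: icon={icon}, graded {grade}")
# 0.320    -> icon=check, graded incorrect  (tests 1 and 2)
# 0.319802 -> icon=x,     graded correct    (test 3)
# 1000.0   -> icon=x,     graded incorrect  (test 4)
```

Only when both intervals agree (test 4) do the icon and the grade match.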

      Expected Results:
      I expected that any response between the min and max (as listed in the datasets page) would be marked correct. Someone else evidently expected this too, because the check mark and x match this expectation. The reason I prefer this approach is that I teach analytical chemistry, where reporting extra digits is incorrect, and the datasets (min, max) marks students wrong if they report more digits than they should. (The grader's current behavior rewards students who report too many digits.) I understand if other users prefer the current grading behavior, but then the datasets page should reflect that behavior consistently.

      Kenyon College recently upgraded to Moodle 1.8, and I have only tried this on 1.8, so I don't know whether it occurred in 1.7. Sorry I can't give any information about the server environment, as I am a user, not an administrator. I am sending this information to the administrators as well; maybe they can chime in with the appropriate details.

      This issue is similar to MDL-6879. In this case, however, the problem is not a misunderstanding of tolerance: there are two ways to apply the tolerance, and different parts of the software apply them inconsistently.


          Attachments

          1. Dataset page.jpg
            Dataset page.jpg
            22 kB
          2. Define question.jpg
            Define question.jpg
            31 kB
          3. qtype_datasetdependent.php
            3 kB
          4. screenshot-1.jpg
            screenshot-1.jpg
            70 kB
          5. screenshot-2.jpg
            screenshot-2.jpg
            97 kB
          6. screenshot-3.jpg
            screenshot-3.jpg
            152 kB
          7. screenshot-4.jpg
            screenshot-4.jpg
            144 kB
          8. screenshot-5.jpg
            screenshot-5.jpg
            137 kB
          9. screenshot-6.jpg
            screenshot-6.jpg
            250 kB
          10. screenshot-7.jpg
            screenshot-7.jpg
            153 kB
          11. test.xml
            2 kB

            People

            • Votes: 1
            • Watchers: 6
