Moodle / MDL-67274

Tasks: Log display fails with memory errors


    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: 3.7.4, 3.8.1
    • Affects Version/s: 3.7.3
    • Component/s: Tasks
    • Affected Branches: MOODLE_37_STABLE
    • Fixed Branches: MOODLE_37_STABLE, MOODLE_38_STABLE
    • Pull Branch: MDL-67274-master
    Testing instructions

      You will need direct database access to do this test.

      NOTE: This test reproduces the problem with a single large log, because that's easier to set up, but the problem actually appears with multiple smaller logs that add up to a lot.

      NOTE: The SQL REPEAT function used here works on Postgres and, allegedly, on MySQL.

      1. Go to the task logs page on your test server's site administration (Server / Tasks / Task logs).
      2. Hopefully you have at least 2 pages of task logs. If you don't, run cron a few times.
      3. Go to page 2 of the list.
      4. Select one of the tasks at random. Mouse over the magnifier icon - your browser should show a URL such as /admin/tasklogs.php?logid=621. Make a note of the number (621 here).
      5. In your database, run the following query, replacing the number 621 with the correct one and PREFIX with your database table prefix. This sets the output field for this task log entry to about 150 MB of data.
        • Postgres: update PREFIX_task_log set output=REPEAT(E'I am a large log file 32 bytes.\n', 5000000) where id=621
        • MySQL: update PREFIX_task_log set output=REPEAT('I am a large log file 32 bytes.\n', 5000000) where id=621 (maybe)
      6. Reload the task log page.
        • EXPECTED: It should re-display, looking the same as before (and not, for example, crash with a memory error).
      7. Click the magnifier icon next to the log you changed.
        • EXPECTED: The very large log file should display in the browser window (and not give a memory error).

      Note: There is still a limit on the size of log files displaying, but hopefully not now on the table display.
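      The "about 150 MB" figure can be sanity-checked, and the oversized row produced from a script instead of raw SQL, as in this sketch (SQLite stand-in; the table name `mdl_task_log` assumes Moodle's default `mdl_` prefix, and the row is scaled down here so it runs quickly):

```python
import sqlite3

# Each repeated line is exactly 32 bytes, matching the REPEAT() call above.
LINE = "I am a large log file 32 bytes.\n"
REPS = 5_000_000
assert len(LINE) == 32

# 32 bytes x 5,000,000 = 160,000,000 bytes, i.e. roughly 150 MiB.
print(f"{len(LINE) * REPS / (1024 ** 2):.1f} MiB")  # -> 152.6 MiB

# Stand-in for the UPDATE above: write an oversized 'output' value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mdl_task_log (id INTEGER PRIMARY KEY, output TEXT)")
conn.execute("INSERT INTO mdl_task_log (id, output) VALUES (?, ?)",
             (621, LINE * 100))  # use REPS instead of 100 for the full-size test
size = conn.execute(
    "SELECT length(output) FROM mdl_task_log WHERE id = 621").fetchone()[0]
print(size)  # -> 3200 for this scaled-down demo
```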


      If the combined output of all tasks shown on one page of the log table is large (large enough that the total size of the output from those tasks exceeds the default memory limit), then the log table view fails, because the database query does a 'SELECT *' that includes the output TEXT field.

      Less of a problem, but if the output of a single task is large, then viewing that single task via the preview popup also fails, because it loads the whole output into memory in order to display it. (I don't really see a way around that one, but we can increase the memory limit just in case.)

      The first bug affects our live system, so it may affect other production systems too. (In practice it is likely to occur if you have a task that generates a lot of log output and you filter for that task, so that a whole table page of tasks with large output is loaded into memory at once.)

      The fix appears to be simple: just don't load the 'output' field in the table query.
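      The shape of that fix can be sketched as follows (a SQLite stand-in with hypothetical column names, not Moodle's actual schema or patch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mdl_task_log (
    id INTEGER PRIMARY KEY, classname TEXT, timestart INTEGER, output TEXT)""")
conn.execute("INSERT INTO mdl_task_log VALUES (1, 'send_mails', 0, ?)",
             ("x" * 10_000,))  # stand-in for a multi-megabyte log

# Problem: SELECT * pulls the whole 'output' value into memory for every row
# on the page, so a page of large logs exhausts the memory limit.
wasteful = conn.execute("SELECT * FROM mdl_task_log").fetchall()

# Fix: list only the columns the table view actually renders; fetch 'output'
# separately, for a single row, when the detail popup is opened.
lean = conn.execute("SELECT id, classname, timestart FROM mdl_task_log").fetchall()

print(len(wasteful[0]), len(lean[0]))  # -> 4 3
```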

            Assignee: Sam Marshall (quen)
            Reporter: Sam Marshall (quen)
            Peer reviewer: Tim Hunt
            Integrator: Jake Dallimore
            Tester: Gladys Basiana
            Votes: 0
            Watchers: 4

              Created:
              Updated:
              Resolved:

                Estimated: Not specified
                Remaining: 0m
                Logged: 1h 10m
