Moodle › MDL-48595

Log exports still consume all memory and fail


    • Affects versions: MOODLE_27_STABLE, MOODLE_28_STABLE, MOODLE_29_STABLE
    • Fixed version: MOODLE_29_STABLE
    • Pull branch: MDL-48595_master

      Sorry, but we need to test this in all supported db drivers.

      • Run phpunit
      • Run behat
      1. Enable the legacy log (also enabling the "Log legacy data" setting inside the logstore_legacy config page), the standard log and the external database log (setting up an external log database)
      2. Generate a lot of logs (you can use the attached filldb.php, setting $i to whatever you feel like). I have around 350000, but you should have enough with quite a few less; try with 100000 at least.
      3. Performance test
        1. Set your memory_limit in php.ini (the Apache one) to 32MB (all steps should also work setting it to 64MB)
        2. Go to Course -> Reports -> Log
        3. For each store:
          1. Check that you can download logs (it will probably take a while) and that the number of rows is not limited by the pagination (it will return more than 100 results)
        4. (To check that this solves the performance issue) Edit the report_log_table_log::query_db() function (report/log/classes/table_log.php), replacing the get_events_select_iterator call with a call to get_events_select
          1. Click on download logs (the format is not important now)
          2. You will probably run out of memory and receive a PHP error instead of being able to download the logs; if you don't, reduce your memory_limit even more and repeat the steps above to ensure the fix solves the problem
      4. Regression test
        1. Go to Course -> Reports -> Log
        2. For each store:
          1. Check that you can see all the different stores' logs
          2. Move to the next page, then to another page selecting it by its number...
          3. It SHOULD all work
          4. Check that all columns contain valid links
          5. Download logs using all available formats
          6. You SHOULD not have any problem
        3. Go to Course -> Reports -> Live log
        4. Check that you can see all store logs properly
        5. Create a new logstore implementing the deprecated interfaces
        6. Check that it works
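The code edit described in step 4 of the performance test can be sketched as follows. This is a paraphrase, not a copy of report/log/classes/table_log.php; the surrounding member and method names ($this->filterparams->logreader and the pagination helpers) are assumptions and may not match the file exactly:

```php
<?php
// Inside report_log_table_log::query_db() (report/log/classes/table_log.php).
// The fixed code fetches events lazily through an iterator:
$this->rawdata = $this->filterparams->logreader->get_events_select_iterator(
    $joins, $params, $order, $this->get_page_start(), $this->get_page_size());

// To reproduce the out-of-memory failure for the test, swap in the
// all-at-once call, which materialises every matching event in memory:
$this->rawdata = $this->filterparams->logreader->get_events_select(
    $joins, $params, $order, $this->get_page_start(), $this->get_page_size());
```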
    • Sprint: Team Beards Sprint 4
    • Size: Large

      I see MDL-34867 is closed, but unfortunately things have actually got worse since then.

      The new logging API, via get_events_select, requires fetching all events in one batch, and this tends to consume boatloads of memory as they are now huge event objects. Alternatively, some sections of code have hacks to do this in pieces, resulting in many expensive logging table queries.
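To make the memory problem concrete, this is roughly what the all-in-one pattern looks like for a consumer of the logging API (a hedged sketch, not real report code; $store and the WHERE fragment are placeholders):

```php
<?php
// Hedged sketch of the current consumption pattern.
// get_events_select() returns ALL matching events as one array:
$events = $store->get_events_select($selectwhere, $params, $sort, 0, 0);
foreach ($events as $event) {
    // ... render/export one row per event ...
}
// With 500000+ records, the full array of event objects is built before
// the first iteration, easily blowing a 32MB or 64MB memory_limit.
```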

      Use cases we are seeing involve 500000 or more records, so the all-in-one-go model used frequently in code paths such as the log report just does not work. We have previously provided anonymised databases to demonstrate the general scale required.

      Anyway, I'm providing the latest dirty trick we need to make this functionality work /some/ of the time at least, by allowing get_events_select to return an iterable list. As its consumers may loop over it multiple times, we also need a closure which generates the recordset, and we re-run it on rewind()...
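The idea behind the iterable return value can be sketched like this (an illustrative implementation, not the actual patch; moodle_recordset and $DB->get_recordset_select() are real Moodle DML APIs, while the events_iterator class itself is hypothetical):

```php
<?php
// Illustrative sketch: an Iterator over a DB recordset whose rewind()
// re-runs a factory closure, so consumers can loop over it more than once
// without ever holding all rows in memory at the same time.
class events_iterator implements Iterator {
    /** @var callable Closure that opens a fresh moodle_recordset. */
    private $factory;
    /** @var moodle_recordset|null Currently open recordset. */
    private $recordset = null;

    public function __construct(callable $factory) {
        $this->factory = $factory;
    }
    public function rewind() {
        if ($this->recordset !== null) {
            $this->recordset->close();
        }
        // Re-open the recordset so a second foreach starts from the top.
        $this->recordset = call_user_func($this->factory);
    }
    public function valid() {
        return $this->recordset->valid();
    }
    public function current() {
        // One raw log record at a time; the real store would turn each
        // record back into an event object here.
        return $this->recordset->current();
    }
    public function key() {
        return $this->recordset->key();
    }
    public function next() {
        $this->recordset->next();
    }
}

// Usage: the closure captures the query, so rewind() can re-issue it.
$iterator = new events_iterator(function() use ($DB, $selectwhere, $params, $sort) {
    return $DB->get_recordset_select('logstore_standard_log', $selectwhere, $params, $sort);
});
```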

      Hope somebody at HQ eventually understands how broken the scaling of these core components is before yet another rewrite...

        1. Attachment: filldb.php (0.3 kB, David Monllaó)

            Participants: David Monllaó (dmonllao), Tony Levi (tlevi), Zachary Durber, Dan Poltawski, Jetha Chan
            Votes: 3
            Watchers: 13

