Moodle - MDL-23603

Course Participation Report can bring down the site

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.9.9, 2.0
    • Fix Version/s: 1.9.10
    • Labels:
      None
    • Environment:
      Discovered on MySQL, but inefficient SQL likely causes performance issues on other DBMSes also
    • Database:
      Any
    • Affected Branches:
      MOODLE_19_STABLE, MOODLE_20_STABLE
    • Fixed Branches:
      MOODLE_19_STABLE
    • Rank:
      27392

      Description

      Running a Course Participation report can bring down a Moodle site, if the log table is large enough. On an instance with ~18.8 million rows in the log table, the entire site slows to a crawl and then flicks over to giving the "Error: Database connection failed" error screen (screenshot attached).

      I've been able to improve the performance considerably by removing the join to the log table in the query that's only used for counting users (as it was completely unnecessary), and switching the queries to use subqueries to filter large tables before joining them to other tables, rather than joining to the entire table and then filtering the result of the join. On my test environment, I managed to go from ~15 seconds to run a report to <0.5 seconds.
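The reshaping described above can be sketched with a toy schema (Python's sqlite3; the table and column names here are illustrative, not Moodle's actual schema or queries): the subquery prunes the large log table before the join, instead of joining the full table and filtering the joined result afterwards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE log (userid INTEGER, course INTEGER, time INTEGER)")
cur.executemany("INSERT INTO user VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO log VALUES (?, ?, ?)",
                [(1, 5, 100), (1, 5, 110), (2, 5, 120), (2, 9, 130)])

# Join-then-filter: every log row enters the join before WHERE prunes it.
join_then_filter = """
    SELECT u.id, COUNT(*) FROM user u
    JOIN log l ON l.userid = u.id
    WHERE l.course = 5
    GROUP BY u.id"""

# Filter-then-join: the subquery reduces log to the rows for one course
# first, so on a large table the join touches far fewer rows.
filter_then_join = """
    SELECT u.id, COUNT(*) FROM user u
    JOIN (SELECT userid FROM log WHERE course = 5) l ON l.userid = u.id
    GROUP BY u.id"""

# Both shapes return the same per-user counts.
assert (sorted(cur.execute(join_then_filter).fetchall())
        == sorted(cur.execute(filter_then_join).fetchall()))
```

Whether the planner actually benefits from the second shape depends on the DBMS and its version; the timings later in this thread show it can go either way.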

      Patches attached for both 1.9.9 and 2.0 - it could probably be backported into earlier versions easily also (I haven't tried). The patches use subqueries for both the log table and the role_assignments table, though the performance gain on the latter may be negligible unless you have many thousands of users, each enrolled in many, many courses...

      1. MDL-23603.20100923.m19.patch
        3 kB
        Sam Hemelryk
      2. MDL-23603.20100923.m20.patch
        3 kB
        Sam Hemelryk
      3. moodle19-participationreport.patch
        3 kB
        Paul Nicholls
      4. moodle20-participationreport.patch
        3 kB
        Paul Nicholls
      1. error.jpg
        28 kB

        Activity

        Sam Hemelryk added a comment -

        Hi Paul,

        Thanks for the patch.

        First up I've added our DB experts as watchers to this issue as I am certainly no DB expert myself.
        I've just been running some tests myself to see what the performance was like around the two queries you are patching.
        Unfortunately I didn't have a test environment with quite enough authentic users + log entries, so I have set one up with script-generated information.
        All tests were conducted as an admin user on a fresh installation of Moodle 2.0, with fresh installs of both Postgres and MySQL, running on Ubuntu 10.04.
        For the tests I timed how long it took to execute both SQL queries.

        Postgres - 8.4

        1'500'000 Relevant log records + 10 enrolled participating users

        Attempt Before patch (seconds) After patch (seconds)
        1 2.12417s 1.23815s
        2 2.09234s 1.23219s
        3 2.0825s 1.21625s
        4 2.13202s 1.2059s
        5 2.09544s 1.21678s
        Ave 2.105294s 1.221854s

        10'000'000 Relevant log records + 10 enrolled participating users

        Attempt Before patch (seconds) After patch (seconds)
        1 15.75244s 8.47304s
        2 15.1433s 8.40882s
        3 15.1745s 8.49321s
        4 15.53962s 8.44524s
        5 15.19778s 8.39072s
        Ave 15.361524s 8.442206s

        MySQL - 5.0 + InnoDB

        1'500'000 Relevant log records + 10 enrolled participating users

        Attempt Before patch (seconds) After patch (seconds)
        1 5.47183s 8.66974s
        2 5.76026s 8.49075s
        3 5.30905s 8.55524s
        4 5.47588s 8.63208s
        5 5.62312s 8.81061s
        Ave 5.528028s 8.631684s

        10'000'000 Relevant log records + 10 enrolled participating users

        Attempt Before patch (seconds) After patch (seconds)
        1 38.58146s 601.75725s
        2 37.29972s 593.24538s
        3 37.21027s 599.72463s
        4 37.01103s 582.52457s
        5 36.98076s 587.07681s
        Ave 37.416648s 592.865728s

        By the looks of it the patch is a plus for Postgres and a minus for MySQL, with the 10'000'000 record test practically killing MySQL.

        Now that all being said I really don't know how relevant my testing is, I didn't have a large number of enrolled users although the site itself has 1000+ users created.
        Perhaps Eloy/Petr could take a look at the patch and let us know their thoughts, as they will have a much better idea about the implications of such changes.

        Also Paul can you provide us with some more information about the test environment you are using?

        Cheers
        Sam

        Sam Hemelryk added a comment -

        Ohhh forgot to add my testing was done on Moodle 2.0.

        Paul Nicholls added a comment -

        Hi Sam,
        We discovered the issue on our production Moodle 1.9.9 environment, running on MySQL 5 (MyISAM - we're going to convert to InnoDB when we upgrade to 2.0, or possibly before if we can be convinced that it'll make a big enough difference). The 18.8 million records was the total number of records in the log table - nowhere near all of them would have been relevant, as we have nearly 2,000 courses (not all active at the same time) with anywhere from a couple of dozen to a few hundred students in each. We saw a huge improvement in performance, both on my Windows testing/development environment and on the production linux environment.

        I'll do some testing with the same data on 2.0 on both MyISAM and InnoDB - but it'll take a while, as the 2.0 upgrade takes forever to stretch the "ip" column in a table of this size to a maximum of 45 characters (I had to stop it as my PC slowed to a crawl trying to do this on my local development environment - perhaps I'll run it overnight, but it may prove troublesome when we come to do the upgrade for real...)

        Was the MySQL instance you were using running on an optimised or default config? Could it be that your MySQL config is causing some kind of bottleneck in the new approach? It might pay to check the Performance page's MySQL section to see if you can improve your config: http://docs.moodle.org/en/Performance#MySQL_performance

        Cheers,
        Paul

        Sam Hemelryk added a comment -

        Hi Paul,

        Thanks for the reply and continuing to help out through testing and feedback.

        Indeed MySQL and Postgres are running with the default configurations, the only changes that have been made to them are for authentication and connection.
        We aim to keep everything in working order with the default configurations that get provided with the database engines, I suppose there is a little flexibility to require users with large installations to tweak their config but I imagine we'd like to avoid it if possible.

        That being said, I've spent this morning looking at the query and how it could be optimised, and I think I've come up with a solution, based on the changes that you've made, that fixes this issue up as described below.
        I've attached two patches, one for 2.0 (m20) and one for 1.9 (m19); if you're happy to, would you be able to test those for me and see whether it still solves the problem on your test environment? I have no doubt your test environment is WAY more accurate to a real system than mine.

        The changes that I have made are all centered around the subselect on the log table. The following table shows the results I got when running the completed query through MySQL's command line with the EXPLAIN EXTENDED keywords:

        id select_type table type possible_keys key key_len ref rows filtered Extra
        1 PRIMARY <derived2> ALL NULL NULL NULL NULL 10 100.00 Using temporary; Using filesort
        1 PRIMARY u eq_ref PRIMARY PRIMARY 8 ra.userid 1 100.00  
        1 PRIMARY <derived3> ALL NULL NULL NULL NULL 10049917 100.00  
        3 DERIVED log ref log_tim_ix,log_act_ix,log_cmi_ix log_act_ix 42   5025347 100.00 Using where
        2 DERIVED role_assignments ref roleassi_rol_ix,roleassi_con_ix roleassi_rol_ix 8   49 100.00 Using where

        Although I'm not 100% sure about how to interpret it all, I'm pretty confident in saying that the derived log table containing the full recordset is our problem.
        The solution I came up with was to take the modifications you had made and then move the calculations into the log table's subselect, reducing the calculated record set from the full log table to one row per user.

        The following was the analysis of the query after doing so:

        id select_type table type possible_keys key key_len ref rows filtered Extra
        1 PRIMARY <derived2> ALL NULL NULL NULL NULL 10 100.00 Using temporary; Using filesort
        1 PRIMARY <derived3> ALL NULL NULL NULL NULL 10 100.00  
        1 PRIMARY u eq_ref PRIMARY PRIMARY 8 ra.userid 1 100.00  
        3 DERIVED log ref log_tim_ix,log_act_ix,log_cmi_ix log_act_ix 42   5025347 100.00 Using where; Using temporary; Using filesort
        2 DERIVED role_assignments ref roleassi_rol_ix,roleassi_con_ix roleassi_rol_ix 8   49 100.00 Using where
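The restructuring can be sketched with a toy schema (sqlite3; names are illustrative, not the actual Moodle query): the aggregation moves inside the log subselect, so the derived table the outer query joins against holds one row per user instead of the full multi-million-row recordset.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE log (userid INTEGER, course INTEGER)")
cur.executemany("INSERT INTO user VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO log VALUES (?, ?)",
                [(1, 5), (1, 5), (2, 5), (2, 9)])

# Aggregate inside the subselect: the derived table is one row per user,
# so the outer join and sort only ever handle a handful of rows.
per_user_counts = """
    SELECT u.id, l.n FROM user u
    JOIN (SELECT userid, COUNT(*) AS n
          FROM log WHERE course = 5
          GROUP BY userid) l ON l.userid = u.id"""

rows = sorted(cur.execute(per_user_counts).fetchall())
assert rows == [(1, 2), (2, 1)]
```

This corresponds to the second EXPLAIN above, where the derived table row estimate drops from ~10 million to 10.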

        I tested the query on both PG and MySQL with 10'000'000 records and got the following results:

        Attempt Postgres MySQL
        1 3.85445 18.61589
        2 3.85558 16.8923
        3 3.85844 16.03433
        4 3.80553 15.86284
        5 3.82064 16.08453

        Whilst it seems to work pretty well on the test environment I have (lots of logs, limited users), I would be very keen to know whether it will stand up when paired with larger numbers of users and ra (role_assignments) records.

        BTW: Again this was tested on Moodle 2.0 for performance. I did test the 1.9 patch to ensure I got a result on MySQL but I haven't tested it for performance or cross DB on 1.9.

        Cheers
        Sam

        Paul Nicholls added a comment -

        Hi Sam,
        A quick test on my Moodle 1.9.9 instance (MySQL / MyISAM again) shows that it does indeed give the performance boost - it went from (with neither my patch nor yours) about 30 seconds from clicking the button to having the page finish loading, down to 2 seconds or less (sorry, I didn't do precise timings). That seems about the same as my original patch, so if you're seeing good performance across all your test environments, we may be onto a winner.

        I'm going to attempt a 2.0 upgrade over the weekend (since it takes so long to increase the max field size of the "ip" column in the log table from 15 to 45 characters), and can then try your 2.0 patch on the same data on Monday (all things going well, upgrade-wise). I'm also going to perform a conversion to InnoDB, and will try it again on there just to make sure nothing funny is going on, but I imagine it should be fine there, too.

        Cheers,
        Paul

        Paul Nicholls added a comment -

        Hi Sam,
        Sorry for the lack of updates - I started the InnoDB migration last week, but it was taking a long time on my development box due to the massive log table (which is, naturally, the most important part of it...) - and then got cut off on Friday night by an automated restart for Windows updates! I've managed to get IT to change my machine to manual installation of updates, so that shouldn't happen again - I'll kick off the upgrade again tonight, and hopefully I'll have something to test on soon. Then we've got more waiting on our hands for the Moodle 2 upgrade to test on that... depending on the 1.9 InnoDB performance, I'll probably only do one Moodle 2 test run, rather than one on InnoDB and one on MyISAM. I'll update again if/when I have any progress.

        -Paul

        Sam Hemelryk added a comment -

        Hi Paul,

        Not a worry, let me know how you get on and if there is anything I can help with.
        We've just been through the process of upgrading Moodle.org to 2.0 and I know what an arduous journey it can be.

        Cheers
        Sam

        Paul Nicholls added a comment -

        Hi Sam,
        So far, I can say that the patch works well on 1.9 on InnoDB - though I've encountered another problem whereby the following query takes a long time:
        get_field_sql('SELECT min(time) FROM '.$CFG->prefix.'log WHERE course = '.$course->id)
        However, I think that's mostly down to my configuration and my development environment - especially since after a restart I got one or two quick runs before it slowed down. An index on the "time" field might help, though - but that would require further testing and would probably take a long time to add on such a large table (I might experiment with this).

        Next up: Moodle 2 upgrade. I'll kick that off tonight (and disable my overnight scheduled task to try to get as much uninterrupted execution time as possible!) and let you know how I get on. I think it's pretty safe to say that the issue raised in this ticket is fixed by your patch, though - whether the time query above is a problem on InnoDB is another issue.

        -Paul

        Paul Nicholls added a comment -

        Just an update on the time query - I've had a quick play, with the following results:

        Original query:
        mysql> select min(time) from mdl_log where course=1965 limit 1;
        +------------+
        | min(time)  |
        +------------+
        | 1265235082 |
        +------------+
        1 row in set (1 min 55.11 sec)

        Rejigged query, using order by and limit but not min():
        mysql> select time from mdl_log where course=1965 order by time asc limit 1;
        +------------+
        | time       |
        +------------+
        | 1265235082 |
        +------------+
        1 row in set (0.42 sec)

        MyISAM seems to have the same problem, though not to anywhere near the same degree (except on a cold start). It seems to receive a speed boost from the rejigged query, too.
        MyISAM - cold start:
        mysql> select min(time) from mdl_log where course=1965 limit 1;
        +------------+
        | min(time)  |
        +------------+
        | 1265235082 |
        +------------+
        1 row in set (2 min 0.36 sec)

        MyISAM - subsequent runs:
        mysql> select min(time) from mdl_log where course=1965 limit 1;
        +------------+
        | min(time)  |
        +------------+
        | 1265235082 |
        +------------+
        1 row in set (1.16 sec)

        MyISAM - rejigged query:
        mysql> select time from mdl_log where course=1965 order by time asc limit 1;
        +------------+
        | time       |
        +------------+
        | 1265235082 |
        +------------+
        1 row in set (0.20 sec)
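The two query shapes return the same value; a quick equivalence check in sqlite3 (toy data, not the production table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mdl_log (course INTEGER, time INTEGER)")
cur.executemany("INSERT INTO mdl_log VALUES (?, ?)",
                [(1965, 1265235999), (1965, 1265235082), (7, 50)])

# Aggregate form.
a = cur.execute(
    "SELECT min(time) FROM mdl_log WHERE course = 1965").fetchone()[0]

# Rejigged form: sort ascending and take the first row; with a usable
# index on time, some engines can stop after one row instead of scanning.
b = cur.execute(
    "SELECT time FROM mdl_log WHERE course = 1965 "
    "ORDER BY time ASC LIMIT 1").fetchone()[0]

assert a == b == 1265235082
```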

        Are you able to test this using your test data on postgres, to see if there's any difference in speed between the two queries (obviously substituting in a course ID with a sizable number of log entries)?

        Cheers,
        Paul

        Paul Nicholls added a comment -

        Sorry, I see the mysql result output has ended up rather messy in here - I've just spotted the little help icon beside the comment field, with instructions for various formatting. I'll use that if I post any more - but you can hopefully understand what's happening above.

        -Paul

        Sam Hemelryk added a comment -

        Hi Paul,

        Thanks for raising the time issue. I've just been running tests and checking this out and it looks like there is already an index on the time field in 1.9 and 2.0.
        On Postgres I got the following results with 10 million records:

        Query Result Time
        select min(time) from mdl_log where course=2 limit 1; 1285059560 113 ms
        select min(time) from mdl_log where course=2 limit 1; 1285059560 15 ms
        select time from mdl_log where course=2 order by time asc limit 1 1285059560 102 ms
        select time from mdl_log where course=2 order by time asc limit 1 1285059560 12 ms

        The first instance of each query was performed after Postgres was stopped, system caches were cleared, and pg was started again.
        Either way the impact on Postgres is tiny, less than a second in all cases.
        Nonetheless, I think I will create another issue for this and look at it there once things settle down here.

        Best of luck with your upgrade tonight, I'm not sure if I've mentioned it already or not but if you haven't set innodb configuration params in your MySQL conf I would highly recommend that.
        I know with Moodle.org using the default innodb settings it would've taken well over 24 hours, whereas after setting them higher we got it down to less than an hour on a dedicated server.

        Cheers
        Sam

        Sam Hemelryk added a comment -

        The patch has been committed to 1.9 and 2.0.

        Cheers Paul for all the fantastic help

        Paul Nicholls added a comment -

        Hi Sam,
        Which innodb configuration params are you referring to? The MySQL section of the performance page on the wiki doesn't mention InnoDB (except turning it off if unused)...

        Cheers,
        Paul

        Sam Hemelryk added a comment -

        Hi Paul,

        The following are the settings I changed when testing the upgrade performance.

        # MySQL 5.1.41
        
        # General changes
        key_buffer		= 1024M
        max_allowed_packet	= 64M
        query_cache_limit	= 16M
        query_cache_size        = 256M
        
        # InnoDB specific
        innodb_buffer_pool_size = 1800M
        innodb_log_file_size    = 512M
        

        The system I was testing on had 8GB of memory, you may need to alter if you have more/less.

        Hope that helps.

        Cheers
        Sam


          People

          • Votes: 1
          • Watchers: 2
