Moodle - MDL-42882

Performance improvement to missing root directory upgrade step

    Details

    • Story Points (Obsolete):
      20
    • Sprint:
      BACKEND Sprint 10

      Description

      In the upgrade to version 2013051402.10 there is a fix for missing root folder entries (file: lib/db/upgrade.php, line 2216).

      To find the fileareas where these entries are missing, a left join is used.
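      The join in question looks like this ({files} is Moodle's placeholder for the prefixed table name; reconstructed here from the EXPLAIN output quoted later in this thread):

          SELECT DISTINCT f1.contextid, f1.component, f1.filearea, f1.itemid
          FROM {files} f1
          LEFT JOIN {files} f2
              ON f1.contextid = f2.contextid
              AND f1.component = f2.component
              AND f1.filearea = f2.filearea
              AND f1.itemid = f2.itemid
              AND f2.filename = '.'
              AND f2.filepath = '/'
          WHERE (f1.component <> 'user' OR f1.filearea <> 'draft')
          AND f2.id IS NULL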

      We have some installations with a really large files table (more than 600 MB).
      On these installations the upgrade fails at this point.

      Here is my attempt to get this to work. Maybe it helps other people with the same problem.

          if ($oldversion < 2013051402.10) {
       
              $sql = "SELECT distinct f1.contextid, f1.component, f1.filearea, f1.itemid
                      FROM {files} f1
                      WHERE f1.component <> 'user' or f1.filearea <> 'draft'";
       
              $rs = $DB->get_recordset_sql($sql);
              $defaults = array('filepath' => '/',
                              'filename' => '.',
                              'userid' => $USER->id,
                              'filesize' => 0,
                              'timecreated' => time(),
                              'timemodified' => time(),
                              'contenthash' => sha1(''));
       
              foreach ($rs as $r) {
                  // Is there a root folder entry for that filearea?
                  $count = $DB->count_records('files', array(
                              'contextid' => $r->contextid,
                              'component' => $r->component,
                              'filearea' => $r->filearea,
                              'itemid' => $r->itemid,
                              'filename' => '.',
                              'filepath' => '/'
                              ));
                  if ($count) {
                      continue;
                  }
       
                  // There is no root folder entry for that filearea.
                  $pathhash = sha1("/$r->contextid/$r->component/$r->filearea/$r->itemid".'/.');
                  $DB->insert_record('files', (array)$r + $defaults +
                          array('pathnamehash' => $pathhash));
              }
              $rs->close();
              // Main savepoint reached.
              upgrade_main_savepoint(true, 2013051402.10);
          }
      


            Activity

            Michael de Raadt added a comment -

            Thanks for reporting that and sharing a solution.

            The history of this is a little unclear. There have been changes in that area since the release you reported.

            Andreas Grabs added a comment -

            Hi Michael,
            we don't apply every weekly update; we only update when the minor version changes or when there are other important changes.
            So we only ran into this problem now.
            I wasn't sure my code was OK, so I haven't made a pull request yet, but at least I have reported it.
            I think only a few people will have this problem.
            Best regards
            Andreas

            Dan Poltawski added a comment -

            Hi Andreas,

            When you say this update step fails, I suppose you are saying the query times out? Do you have any more details about how it fails?

            Andreas Grabs added a comment -

            Hi Dan,
            the problem wasn't in Moodle itself but in MySQL. The result of the join was too big, so MySQL got stuck at 100% CPU. Unfortunately I have no more detailed information, but the solution above solved the problem.
            Best regards
            Andreas

            Dan Poltawski added a comment -

            Here is the query plan in pg on a site with only 1,000 rows. Looks costly:

            im=# explain analyze SELECT distinct f1.contextid, f1.component, f1.filearea, f1.itemid
            im-#                 FROM mdl_files f1 left JOIN mdl_files f2
            im-#                     ON f1.contextid = f2.contextid
            im-#                     AND f1.component = f2.component
            im-#                     AND f1.filearea = f2.filearea
            im-#                     AND f1.itemid = f2.itemid
            im-#                     AND f2.filename = '.'
            im-#                     AND f2.filepath = '/'
            im-#                 WHERE (f1.component <> 'user' or f1.filearea <> 'draft')
            im-#                 and f2.id is null;
                                                                                             QUERY PLAN
            -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
             HashAggregate  (cost=552.05..552.06 rows=1 width=35) (actual time=306.669..306.669 rows=0 loops=1)
               ->  Nested Loop Left Join  (cost=0.28..552.04 rows=1 width=35) (actual time=306.668..306.668 rows=0 loops=1)
                     Filter: (f2.id IS NULL)
                     Rows Removed by Filter: 1172
                     ->  Seq Scan on mdl_files f1  (cost=0.00..60.09 rows=1203 width=35) (actual time=0.009..0.292 rows=1172 loops=1)
                           Filter: (((component)::text <> 'user'::text) OR ((filearea)::text <> 'draft'::text))
                           Rows Removed by Filter: 34
                     ->  Index Scan using mdl_file_comfilconite_ix on mdl_files f2  (cost=0.28..0.40 rows=1 width=43) (actual time=0.261..0.261 rows=1 loops=1172)
                           Index Cond: (((f1.component)::text = (component)::text) AND ((f1.filearea)::text = (filearea)::text) AND (f1.contextid = contextid) AND (f1.itemid = itemid))
                           Filter: (((filename)::text = '.'::text) AND ((filepath)::text = '/'::text))
                           Rows Removed by Filter: 898
             Total runtime: 306.729 ms
            

            Dan Poltawski added a comment -

            This query seems to be cheaper (even though it's using a subselect).

            I need to do a better job of testing its accuracy, though.

            SELECT f1.contextid, f1.component, f1.filearea, f1.itemid
            FROM mdl_files f1
            WHERE (f1.component <> 'user' or f1.filearea <> 'draft')  
            AND NOT EXISTS
            (SELECT 1 FROM  mdl_files f2
            WHERE f2.contextid = f1.contextid
            AND f2.component = f1.component
            AND  f2.filearea = f1.filearea
            AND f2.filename = '.'
            AND f2.filepath = '/')
            GROUP BY f1.contextid, f1.component, f1.filearea, f1.itemid;
            

                                                                                  QUERY PLAN
            ------------------------------------------------------------------------------------------------------------------------------------------------------
             HashAggregate  (cost=148.34..148.35 rows=1 width=35) (actual time=1.445..1.446 rows=1 loops=1)
               ->  Hash Anti Join  (cost=61.18..148.33 rows=1 width=35) (actual time=0.540..1.441 rows=1 loops=1)
                     Hash Cond: ((f1.contextid = f2.contextid) AND ((f1.component)::text = (f2.component)::text) AND ((f1.filearea)::text = (f2.filearea)::text))
                     ->  Seq Scan on mdl_files f1  (cost=0.00..60.09 rows=1203 width=35) (actual time=0.011..0.369 rows=1171 loops=1)
                           Filter: (((component)::text <> 'user'::text) OR ((filearea)::text <> 'draft'::text))
                           Rows Removed by Filter: 34
                     ->  Hash  (cost=60.09..60.09 rows=62 width=27) (actual time=0.402..0.402 rows=60 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 4kB
                           ->  Seq Scan on mdl_files f2  (cost=0.00..60.09 rows=62 width=27) (actual time=0.006..0.373 rows=60 loops=1)
                                 Filter: (((filename)::text = '.'::text) AND ((filepath)::text = '/'::text))
                                 Rows Removed by Filter: 1145
             Total runtime: 1.495 ms
            (12 rows)
            

            Dan Poltawski added a comment -

            I'm not so hot at interpreting the MySQL EXPLAIN results, but here is the old query's plan:

            id  select_type  table  type   possible_keys                              key                       key_len  ref   rows  Extra
            1   SIMPLE       f1     index  mdl_file_comfilconite_ix                   mdl_file_comfilconite_ix  470      NULL  1083  Using where; Using index
            1   SIMPLE       f2     ALL    mdl_file_comfilconite_ix,mdl_file_con2_ix  NULL                      NULL     NULL  1083  Using where; Not exists; Using join buffer (Block Nested Loop)

            And new:

            id  select_type         table  type   possible_keys                              key                       key_len  ref   rows  Extra
            1   PRIMARY             f1     index  mdl_file_comfilconite_ix                   mdl_file_comfilconite_ix  470      NULL  1083  Using where; Using index; Using temporary; Using filesort
            2   DEPENDENT SUBQUERY  f2     ALL    mdl_file_comfilconite_ix,mdl_file_con2_ix  NULL                      NULL     NULL  1083  Using where

            The lack of a block nested loop seems better to me, but to be honest I'm guessing. I need a bigger dataset.

            Dan Poltawski added a comment -

            OK, I've now created 3.1 million file records in my files table, and unfortunately my query doesn't seem to do much good:

            mysql> SELECT f1.contextid, f1.component, f1.filearea, f1.itemid
                -> FROM mdl_files f1
                -> WHERE (f1.component <> 'user' or f1.filearea <> 'draft')
                -> AND NOT EXISTS
                -> (
                -> SELECT 1 FROM  mdl_files f2
                -> WHERE f2.contextid = f1.contextid
                -> AND f2.component = f1.component
                -> AND  f2.filearea = f1.filearea
                -> AND f2.filename = '.'
                -> AND f2.filepath = '/'
                -> )
                -> GROUP BY f1.contextid, f1.component, f1.filearea, f1.itemid;
            Empty set (30.79 sec)
             
            mysql> SELECT  f1.contextid, f1.component, f1.filearea, f1.itemid
                -> FROM mdl_files f1 left JOIN mdl_files f2
                ->                     ON f1.contextid = f2.contextid
                ->                     AND f1.component = f2.component
                ->                     AND f1.filearea = f2.filearea
                ->                     AND f1.itemid = f2.itemid
                ->                     AND f2.filename = '.'
                ->                     AND f2.filepath = '/'
                ->                 WHERE (f1.component <> 'user' or f1.filearea <> 'draft')
                ->                 and f2.id is null;
            Empty set (29.99 sec)
            

            Dan Poltawski added a comment - edited

            So at this stage, I am a bit skeptical that there is a more efficient way to write this query.

            I am also pretty sure that a query per file area would be catastrophic for most sites, so I don't think that is the way to go.

            Dan Poltawski added a comment -

            Hi Kris Stokking. As you seem to fall victim to some of our inefficient upgrade steps, I'm just wondering if you have run into any problems with this upgrade step on files?

            Tim Hunt added a comment -

            How about

            SELECT areas.contextid, areas.component, areas.filearea, areas.itemid
            FROM (
                SELECT DISTINCT f1.contextid, f1.component, f1.filearea, f1.itemid
                FROM {files} f1
                WHERE (f1.component <> 'user' or f1.filearea <> 'draft')
            ) areas
            LEFT JOIN {files} f2 ON
                    areas.contextid = f2.contextid
                AND areas.component = f2.component
                AND areas.filearea = f2.filearea
                AND areas.itemid = f2.itemid
                AND f2.filename = '.'
                AND f2.filepath = '/'
            WHERE f2.id is null;
            

            You can also try that with a GROUP BY, rather than a DISTINCT, in the areas subquery.
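            A sketch of that GROUP BY variant of the areas subquery:

                SELECT f1.contextid, f1.component, f1.filearea, f1.itemid
                FROM {files} f1
                WHERE (f1.component <> 'user' OR f1.filearea <> 'draft')
                GROUP BY f1.contextid, f1.component, f1.filearea, f1.itemid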

            I can't see any way that the database can do less work than that and still get the results we want.

            Actually, yes I can.

            SELECT contextid, component, filearea, itemid,
                MAX(CASE WHEN filename = '.' AND filepath = '/' THEN 1 ELSE 0 END) AS rootdirexists
            FROM {files}
            WHERE (component <> 'user' OR filearea <> 'draft')
            GROUP BY contextid, component, filearea, itemid
            HAVING rootdirexists = 0
            

            That will do a single scan through the table, I think. Note that you might have to inline the expression for rootdirexists in the HAVING.
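            On databases that don't accept the column alias in HAVING, the inlined variant would read (a sketch):

                SELECT contextid, component, filearea, itemid
                FROM {files}
                WHERE (component <> 'user' OR filearea <> 'draft')
                GROUP BY contextid, component, filearea, itemid
                HAVING MAX(CASE WHEN filename = '.' AND filepath = '/' THEN 1 ELSE 0 END) = 0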

            Dan Poltawski added a comment -

            Kudos Tim, thanks! I was trying to think of a way to achieve this with a group by and I think you've got it.

            Now with 4,094,145 file records in MySQL (on my super-fast SSD):

            Original query: 36.83 sec
            Tim's variant: 8.63 sec

            Kris Stokking added a comment -

            Hey Dan - I appreciate the callout to scalability issues such as this. The interesting part is that we've already rolled out 2.5.3 to all of our Joule 2 clients with great success - it was actually our fastest and smoothest upgrade to date. I think that just goes to show you that Moodle performance is dependent on an incredible number of variables, and that large data sets should always be considered.

            I will say that this type of issue is an excellent candidate to (optionally) move out of the standard Moodle upgrade process, as it is (a) expensive, (b) a cleanup script for an issue that does not affect all sites, and (c) something whose execution nothing in Moodle depends on (other than the bug fix). I dream of a Moodle where admins could execute independent cleanup scripts outside of the main upgrade process to minimize the required downtime (sketched below). Just some food for thought.
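            A minimal sketch of the kind of standalone cleanup script Kris describes, using Moodle's standard CLI bootstrap (the script path is hypothetical, and the body is left to the queries discussed in this issue):

                <?php
                // Hypothetical standalone cleanup script,
                // e.g. admin/cli/fix_missing_root_folders.php.
                define('CLI_SCRIPT', true);

                require(__DIR__ . '/../../config.php');      // Bootstrap Moodle without a web session.
                require_once($CFG->libdir . '/clilib.php');  // CLI helper functions.

                cli_heading('Fixing missing root folder entries in the files table');

                // The expensive detection and insert logic discussed in this issue
                // would run here, outside the version-gated upgrade path, so an
                // admin can schedule it during low-load hours.

                mtrace('Done.');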

            Dan Poltawski added a comment - edited

            [Edit: Removed needless comments making history hard to follow from my own mistakes]

            Dan Poltawski added a comment - edited

            [Edit: Removed needless comments making history hard to follow from my own mistakes]

            Dan Poltawski added a comment -

            Well, that was a nice wild goose chase. I was predicting crazy things in the phpunit reset code, but it turned out to be me incorrectly putting a 0 in place of a 1 (yet testing on the SQL command line with the correct query).

            Tim Hunt added a comment -

            I just reviewed the code, and it looks good to me. This is not yet waiting for peer review, but you can consider this a peer review if you like.

            Dan Poltawski added a comment -

            Thanks Tim. I'm running the tests on all the databases before sending this for peer review.

            But my question for the peer reviewer would be: do you think these unit tests are sufficient? I am conscious of a complex set of test data becoming hard to understand, so I just went for the simple case (sketched below).
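            A rough illustration of what such a simple-case test could look like (a sketch only, not the committed test; it assumes a method inside a Moodle advanced_testcase, and the test name and fixture values are hypothetical):

                // Sketch: one file area missing its root directory entry is detected.
                public function test_missing_root_folder_is_detected() {
                    global $DB;
                    $this->resetAfterTest();

                    // Create a file; the File API also creates the '/' + '.' root entry.
                    $fs = get_file_storage();
                    $context = context_system::instance();
                    $fs->create_file_from_string(array(
                        'contextid' => $context->id,
                        'component' => 'phpunit',
                        'filearea'  => 'data',
                        'itemid'    => 0,
                        'filepath'  => '/',
                        'filename'  => 'test.txt',
                    ), 'test content');

                    // Simulate the bug by deleting the root directory entry.
                    $DB->delete_records('files', array(
                        'contextid' => $context->id,
                        'component' => 'phpunit',
                        'filearea'  => 'data',
                        'itemid'    => 0,
                        'filepath'  => '/',
                        'filename'  => '.',
                    ));

                    // The detection query should now report exactly this file area.
                    $sql = "SELECT contextid, component, filearea, itemid
                              FROM {files}
                             WHERE (component <> 'user' OR filearea <> 'draft')
                          GROUP BY contextid, component, filearea, itemid
                            HAVING MAX(CASE WHEN filename = '.' AND filepath = '/'
                                            THEN 1 ELSE 0 END) = 0";
                    $this->assertCount(1, $DB->get_records_sql($sql));
                }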

            Tim Hunt added a comment -

            The test is enough to show that a DB can parse the query without a fatal error, and returns the right thing in simple cases. I think that is sufficient.

            Dan Poltawski added a comment -

            OK, sending for peer review. (It would be nice if someone else could look at this, since Tim came up with the query, but if it doesn't get reviewed in the next few days, I'll send it for integration with Tim's review.)

            Petr Skoda added a comment -

            1/ $USER should not be used in upgrade code because it can be any user who is logged in, or guest; I guess 0 should be OK there (fetching the main admin id should work too, but it is less elegant because we should not call APIs from upgrade)
            2/ the $pathhash "xxxxx/".'/.' string construct seems a bit weird (see the sketch below)
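            A sketch of both suggestions applied to the snippet from the description (illustrative only):

                // 1/ Don't depend on whoever happens to be logged in during upgrade.
                $defaults['userid'] = 0;
                // 2/ Build the pathname hash from a single interpolated string.
                $pathhash = sha1("/{$r->contextid}/{$r->component}/{$r->filearea}/{$r->itemid}/.");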

            the rest is ok imo, feel free to submit for integration if you standardise the user id

            Thanks!

            Dan Poltawski added a comment -

            Thanks Petr, I added a commit with your suggestions:

            • Use userid 0
            • Tidied up the string construct. I also added an assertion to the unit tests to ensure the pathnamehash is correctly constructed.

            TO INTEGRATOR: feel free to squash this clean up commit, or ignore it if you don't agree with the changes. I initially avoided changing anything which wasn't directly related to the SQL query.

            Andreas Grabs added a comment -

            Hi, thank you very much for solving this issue!
            Best regards Andreas

            CiBoT added a comment -

            Moving this issue to current integration cycle, will be reviewed soon. Thanks for the hard work!

            Sam Hemelryk added a comment -

            Thanks Dan - this has been integrated now

            Michael de Raadt added a comment - edited

            So far I have run unit tests on Oracle, PostgreSQL and MSSQL with all three branches (2.5, 2.6 and master).

            I've started tests running on MySQL (on each branch simultaneously). They have been running for about an hour and have reached about 15%. This should be complete by tomorrow morning.

            Michael de Raadt added a comment -

            Test result: Success!

            Tests passing on all supported versions under all five DB drivers.

            Eloy Lafuente (stronk7) added a comment -

            Fetch your remotes, prune them,
            clean your integrated branches and say "Ahem".

            Rebase your ongoing stuff, keep conflicts away
            don't forget to test the code and we'll love you again.

            Thanks, closing!


              People

              • Votes: 0
              • Watchers: 8