Moodle
MDL-8163

Record cache cannot be enabled from admin settings

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.8
    • Fix Version/s: 1.8
    • Component/s: Administration
    • Labels:
      None
    • Affected Branches:
      MOODLE_18_STABLE
    • Fixed Branches:
      MOODLE_18_STABLE
    • Rank:
      29311

      Description

      (This may all be issues you already know about but I thought I should get it filed to make sure.)

      Prior to the record cache changes, the cache was enabled via $CFG->enablerecordcache, which controlled the maximum number of records stored in the cache (0 = off, 50 = default).

      The changes to support external caches seem to mean it is now controlled by $CFG->rcache, which must be set to 'internal' to enable it.

      Several issues to do with this config option:

      1) The admin setting for enablerecordcache is still there, even though it now has no effect!
      2) There is no admin setting for rcache.
      3) Upgrade from earlier versions did not carry through an enablerecordcache > 0 setting to become rcache=internal.

      4) I could very well be mistaken on this, but I couldn't see anywhere to set a limit on the memory cache size when using 'internal'. This was added to the previous record cache as a safety measure: even if you request every record in mdl_user, it would still never cache more than 50, so there was no possibility of memory problems. However, maybe I just didn't notice the code.
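      To make the config change concrete, a site admin migrating an existing config.php would go from the old setting to the new one roughly as follows. This is a sketch based on this report only, not official documentation; the option names are those mentioned above.

```php
<?php
// config.php fragment -- a sketch based on this report, not official docs.

// Pre-change setting: maximum number of records held in the record cache
// (0 = off, 50 = default). Per issue 1) above, this setting still appears
// in the admin UI but now has no effect.
$CFG->enablerecordcache = 50;

// Post-change setting: must be set to 'internal' to enable the record
// cache at all (per this report). Issue 3) above is that upgrade did not
// translate enablerecordcache > 0 into this setting automatically.
$CFG->rcache = 'internal';
```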

        Activity

        Martín Langhoff added a comment -

        Hi Sam! The commit messages talk a bit about it... I was expecting that you'd play a bit with it on your sample data and post some feedback on how it works. In the meantime, I've left the config part untouched – I need to finish it, but was waiting for news from your side.

        In terms of limiting the cache size, your code didn't keep a good tally of the records, so it wasn't working as you expected. I dropped that part for the moment. In any case, it's better to keep a FIFO cache instead.

        Martín Langhoff added a comment -

        Just committed some fixes

        Sam Marshall added a comment -

        I haven't had a chance to check this yet, but I'm sure it's all good. I'd still want to see a memory size / number-of-items limit, as not having one could cause unexpected problems, though in practice it doesn't seem to break much. I guess that should really be a separate bug.

        By the way, the way I had it (caching the first 50 requested records) should actually give better performance than FIFO (caching the last 50 requested records). The reason is that 'key' records – such as course and course_modules entries, and roles entries for the contexts currently in use – tend to be requested before much else happens on the page, while if there are many requests, it's probably some kind of repetitive process looping through things that will only be accessed once each. Obviously the best algorithm would combine how recently a record was accessed with how many times it was accessed when deciding which records to chuck out. But it's probably not necessary to go to that extent, since the internal cache is only a per-request cache and doesn't persist in memory.

        Martín Langhoff added a comment -

        Call me cheeky: I added the config option for the records limit, but didn't implement the code.

        In terms of FIFO vs first-in, it's probably a split: a stupid loop that gets the same stuff repeatedly is the scenario we want to "fix", and if it doesn't fit under the limit we're back to square one, but with a full cache of uninteresting entries. The early important requests we know about (sitecourse?), and those can be cached separately in global vars anyway.

        For the cache to fight bad, thrashy pages, I think FIFO is better, and an LRU scheme is best overall, but I'm too lazy, and a quick Google didn't yield any LRU/pseudo-LRU code in PHP. Thinking about it, I think it's impossible to make a lightweight LRU in PHP – the memory footprint of the tracking variables would suck.
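        The capped FIFO eviction scheme discussed in this thread can be sketched in a few lines of PHP. This is an illustration only, not Moodle's actual implementation; the class and method names are hypothetical. It relies on PHP arrays preserving insertion order, so the first key in the array is always the oldest entry.

```php
<?php
// A minimal sketch (hypothetical, not Moodle's code) of a capped FIFO
// record cache: once the cache holds $limit records, the oldest entry
// is evicted to make room for the new one.

class FifoRecordCache {
    private $limit;
    private $store = array();

    public function __construct($limit = 50) {
        $this->limit = $limit;
    }

    public function set($key, $record) {
        if (array_key_exists($key, $this->store)) {
            unset($this->store[$key]); // re-insert at the tail
        } elseif (count($this->store) >= $this->limit) {
            // Evict the oldest entry (PHP arrays keep insertion order).
            array_shift($this->store);
        }
        $this->store[$key] = $record;
    }

    public function get($key) {
        return isset($this->store[$key]) ? $this->store[$key] : false;
    }
}
```

        A per-request limit along these lines is what point 4 of the description asks for. An LRU variant would additionally move an entry to the tail of the array on every get(), which is exactly the extra bookkeeping the comment above considers too heavy for PHP.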


          People

          • Votes: 0
          • Watchers: 0
