Moodle / MDL-42071

Support for split memcache/memcached stores

    Details

    • Testing Instructions:

      This is on a Mac, but should be similar on Linux. Windows is unknown to me.

      Requirements:

      • memcached (the service)
      • memcached (PECL)
      • memcache (PECL)
      • phpunit setup

      You will need two instances of memcached running. On a Mac (Linux should be the same) you can start two by opening two shells and running one command in each:

      memcached -p 11211

      memcached -p 11210

      In another shell, run these commands, and confirm the results are all 0:

      echo stats | nc 127.0.0.1 11211 | grep cmd_

      echo stats | nc 127.0.0.1 11210 | grep cmd_

      Add this to your config.php:

      define('TEST_CACHESTORE_MEMCACHED_TESTSERVERS', '127.0.0.1:11211
      127.0.0.1:11210');
      

      Open a shell, and navigate to the dirroot of the site you are testing with.
      If you haven’t done so yet, run:

      php admin/tool/phpunit/cli/init.php

      Run the memcached test_clustered unit test:

      vendor/bin/phpunit --filter test_clustered cache/stores/memcached/tests/memcached_test.php

      Run these commands:

      echo stats | nc 127.0.0.1 11211 | grep cmd_

      echo stats | nc 127.0.0.1 11210 | grep cmd_

      We expect that the get, set, and flush counts have increased on both servers. My results are:

      merrill ~$ echo stats | nc 127.0.0.1 11211 | grep cmd_
      STAT cmd_get 132
      STAT cmd_set 42
      STAT cmd_flush 1
      STAT cmd_touch 0
      merrill ~$ echo stats | nc 127.0.0.1 11210 | grep cmd_
      STAT cmd_get 66
      STAT cmd_set 41
      STAT cmd_flush 1
      STAT cmd_touch 0
      

      Run the full memcached unit tests and confirm no failures:

      vendor/bin/phpunit cachestore_memcached_test cache/stores/memcached/tests/memcached_test.php

      1. Turn on Performance info under Development > Debugging
      2. Create a new Memcached caching instance
      3. Set the name to Memcached Cluster, Servers to ‘localhost:11211’, Enable clustering, and Set servers to:

        localhost:11211
        localhost:11210

      4. Save with all the remaining default settings.
      5. Add the Memcached Cluster store to the Language string cache.
      6. Reset the two memcached services (ctrl+C, then rerun the command).
      7. Navigate to Home. Once there, reload the page.
      8. We expect to see cachestore_memcachedcluster under core/string, with no misses or sets (or very few).
      9. Purge all caches.
      10. Rerun the two stats commands. We expect the server on port 11211 to have many gets and sets. The server on port 11210 should have many sets, but no gets. Both will have flushes.

      Repeat everything above for the memcache store.
      Define in config.php:

      define('TEST_CACHESTORE_MEMCACHE_TESTSERVERS', '127.0.0.1:11211
      127.0.0.1:11210');
      

      And the PHPUnit tests are:

      vendor/bin/phpunit --filter test_clustered cache/stores/memcache/tests/memcache_test.php

      and

      vendor/bin/phpunit cachestore_memcache_test cache/stores/memcache/tests/memcache_test.php

    • Affected Branches:
      MOODLE_25_STABLE, MOODLE_26_STABLE
    • Fixed Branches:
      MOODLE_28_STABLE
    • Pull Master Branch:
      MDL-42071_MK3

      Description

      In load balanced environments, it can be of significant benefit to have memcached servers running on each front end server, as it removes network time for all access of the cache.

      Most caches must be (or at least really should be) shared; caches that point to localhost (i.e. on each front end) can and will get out of sync, as there is no modification or invalidation between machines.

      Some caches have a very low set/hit ratio, like Lang Strings and DB Schema.

      With that in mind, I propose two cache stores (memcachecluster and memcachedcluster) that fetch from one set of servers (typically localhost) and update a different set of servers (typically the full hostname of each front end).

      Because of the low set/hit ratio, the time lost updating multiple servers is more than made up for by the gain of fetching locally.

      Gliffy Diagrams

      Attachments

      Issue Links

      Activity

              emerrill Eric Merrill added a comment -

              Adding Sam (because we talked in the dev chat) and Ankit (because he is the caching lead) as watchers. Please give feedback (or add whoever you think should look in on this).

              quen Sam Marshall added a comment -

              Change looks straightforward! +1 from me.

              A complete implementation of this feature would be rather more complex - in particular, I think it would be necessary to define for each cache whether it is safe to do this, and then allow this setting/this type of cache only for caches that could safely do it. There is already some infrastructure in the system for allowing certain backend store types based on the guarantees they do or don't offer, but I'm not sure it can handle simply different settings within the same type of cache.

              If that is a problem then it might be good to create this as a subclass rather than just modifying the existing one, i.e. a 'Local shared memcache' instance where the hostname is hardwired to localhost, and you have to fill in the 'servers to purge' list.

              Basically the reason this would be safe (or not safe) depends on whether the actual data needs to be shared or it's okay to only share the 'purge' operation and build up actual data locally. In the case of strings, database tables, and anything else based pretty much entirely on contents of moodle codebase, then there is no need for changes to the data to be shared, only purge on upgrade. So it would be nice if there were some kind of marker in the MUC cache type entries (e.g. for strings) to indicate that they're safe for this kind of caching.

              As doing this kind of 'full' change would probably be complex, I'd be in favour of applying this kind of small simple change beforehand anyhow, just a question if you want to move at all toward the complex one (e.g. doing the subclass I mentioned).
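              The marker Sam proposes might look something like this. This is a hypothetical Python sketch: the 'safeforlocal' flag and these definition dicts are invented for illustration; no such flag exists in MUC.

```python
# Per-definition flag saying the cached data is derived purely from the
# codebase, so per-node copies only need a shared purge, not shared data.
# The 'safeforlocal' key is invented here for illustration.
definitions = {
    'core/string':        {'mode': 'application', 'safeforlocal': True},
    'core/databasemeta':  {'mode': 'application', 'safeforlocal': True},
    'core/coursemodinfo': {'mode': 'application', 'safeforlocal': False},
}

safe = [name for name, d in definitions.items() if d['safeforlocal']]
print(safe)  # -> ['core/string', 'core/databasemeta']
```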

              emerrill Eric Merrill added a comment -

              Yeah, I definitely think that the concept could be expanded - although that would be a much larger project (probably morphing MDL-39837 or similar into a meta).

              I think that this is still pretty safe to land in its current state. We are really only adding the ability to purge multiple servers (which doesn't really cause harm) - yes, it opens up an advanced technique, but doesn't really spell it out (somebody would already have to know that they wanted to do it).

              Really, the safest thing to do would probably be to explain in the docs how it can be used and which caches are safe to split. I would note that on different LB setups that may be a different list - our load balancer has session persistence, so every connection from a client will go to the same back end server, meaning we could store session related caches, but some LB setups don't make that guarantee.

              poltawski Dan Poltawski added a comment -

              Hi Eric,

              Well, I'm not entirely clear on the setup which requires this from the discussion, but I will ignore that for now.

              I don't find the way the port handling works particularly intuitive. But I notice that you are copying how we handle it elsewhere.

              But it seems like you are expecting a weight to be added (limiting the explode to 3 rather than 2) like the other parts of the code. So that doesn't seem right to me?

              emerrill Eric Merrill added a comment -

              The use case is: if you have a load balanced Moodle install, some caches, like strings and db schema, can safely be run independently on each front end, as they change very infrequently. These local memcached instances are much faster than going over the network. This can be implemented by running memcached (the daemon) on each head end, then setting up a cache store with localhost as the server.
              The problem with this setup is that during upgrades (or if you need to purge the cache otherwise), you can only do it by manually resetting each head's memcached server. This patch allows Moodle to properly purge all servers that are using the cache store.

              I allowed 3 params in case somebody followed the same format as the main servers section. I changed the code to raise a DEBUG_DEVELOPER warning if more than 2 parameters are listed for a server.
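              The parsing being discussed can be sketched roughly like this. The real code is PHP inside the memcached store; this Python rendition only illustrates the idea, with the function name and default port chosen here for the example.

```python
# Each line is "host[:port]"; a third ":weight" part (as used by the main
# servers setting) is tolerated but triggers a developer-style warning.
import warnings

DEFAULT_PORT = 11211

def parse_purge_servers(config):
    servers = []
    for line in config.strip().splitlines():
        parts = line.strip().split(':')
        if len(parts) > 2:
            warnings.warn('Unexpected extra parameters in server: ' + line)
        host = parts[0]
        port = int(parts[1]) if len(parts) > 1 else DEFAULT_PORT
        servers.append((host, port))
    return servers

print(parse_purge_servers('127.0.0.1:11211\n127.0.0.1:11210\nlocalhost'))
# -> [('127.0.0.1', 11211), ('127.0.0.1', 11210), ('localhost', 11211)]
```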

              emerrill Eric Merrill added a comment -

              Dan, I also requested peer review from you on MDL-42158, since it is the identical issue, just moved to memcache. The patch is literally identical; I only had to modify the strings (and remove the option passing, because memcache doesn't do that).

              quen Sam Marshall added a comment -

              Another quick review from me (was looking to see if I could submit it - I think not quite), I like this code, just trivia:

              1. I prefer space around dot operator

              https://github.com/merrill-oakland/moodle/compare/d45e65ccadbf4097a970655941ac1c5cae17f26d...MDL-42071#diff-a09c70c5ae39d0e8c649438241df2fd6L327

              2. Typo 'persistant' (it's persistent).

              3. The debugging() calls should not be DEBUG_DEVELOPER as this is an error that users could make and not an error caused only by developers screwing up; probably can just use the default value.

              4. Line 455 - my preference is that all strings should use single quotes (unless required for escapes like the below one).

              5. Line 453 $servers = array(); - what's the point of this? You don't seem to use it? If not a mistake, it maybe makes sense with more context but if so, imo needs a comment to explain why.

              6. Trivia: You've used $purgeserver in one place and $server in the other place as variable names for the same thing, would it be better to make these consistent?

              quen Sam Marshall added a comment -

              A note from me about the use case: We definitely need this feature at the OU. Using local memcaches makes a really big difference to performance, but at present we have to purge the memcaches manually whenever we do a server upgrade - not a good look. Especially the time when they forgot and weird things happened!

              For large multi-server sites that aren't already using this structure, it can be quite a significant performance increase (compared to putting everything on a shared memcache server across the network). Obviously depends on your network latency and so on.

              Basically, this goes back to our 2.4 launch, but there was a major panic and the upgrade nearly got pulled for performance reasons - the local memcache structure is one of the key things that saved it.

              emerrill Eric Merrill added a comment -

              Thanks Sam:

              1. Done (although the docs specifically say developer preference)
              2. Done
              3. Done
              4. Done
              5. It should have been $purgeservers. Fixed
              6. I've changed it so all locations use $purgeserver

              I'm porting these over to MDL-42158 right now.

              emerrill Eric Merrill added a comment -

              OK, I also updated MDL-42158 with the same updates, given they are identical...

              quen Sam Marshall added a comment -

              Thanks Eric, I think this is good for integration now, hopefully others agree

              stronk7 Eloy Lafuente (stronk7) added a comment -

              The main moodle.git repository has just been updated with latest weekly modifications. You may wish to rebase your PULL branches to simplify history and avoid any possible merge conflicts. This would also make integrator's life easier next week.

              TIA and ciao

              stronk7 Eloy Lafuente (stronk7) added a comment -

              While I understand why this is being done… I'm not sure the cache store implementation(s) are the place for this.

              Wouldn't it be better, and simpler, to have an "admin/upgrade_nodb.php" script, properly implemented and documented, in charge of performing any "local" tasks (invalidate caches, empty local cachedir… whatever), avoiding the normal upgrade.php tasks that will be performed only once by one server in the cluster?

              That way, everybody would know that after a code upgrade, at least one of admin/upgrade.php or admin/upgrade_nodb.php scripts must be executed. More yet, we could invent some config.php CFG variable to define the "type" of server ("main", "nodb") or, more yet, the list of tasks to perform. And make all them execute the same upgrade.php but behaving differently based on those variables.

              It could easily be a tool too, I think.

              Said that, I think your implementation looks correct, but I'm going to ask for more opinions because I'm not sure that's the place to implement the solution.

              Thanks and ciao

              emerrill Eric Merrill added a comment -

              I would point out that the use case isn't only during upgrades. This behavior is needed when doing other things, like changing lang strings or pushing out patches (sometimes we have to live push things like javascript patches with no version change - so no upgrade, and we would want a purge anyway). I think that purging across all servers when you hit 'purge all caches' would be the desired behavior in general.

              While we are thinking about the general topic, I had a separate thought listed here, but this would still be different from the purging case above.

              What about having a setup where all cache modifications (set, delete, purge) hit every server, while gets only hit the local one? This would obviously only be useful for stores that have a very low set/hit ratio but need a tighter data guarantee - $CFG is what made me think of this. I don't know that there are any cache uses in Moodle where the set/hit ratio would make it beneficial, but it was just something that came to mind.

              stronk7 Eloy Lafuente (stronk7) added a comment -

              The more I think about this, the more clearly I see that this should NOT be anything executed by a particular store on the "main" server at all.

              If there are some caches that really provide a high benefit being run locally (versus shared, which is the only "supported" way)… we should think about a separate solution, using events and remote calls and/or a custom plugin in charge of doing it when those events happen. Or, reducing it to the simplest form, a custom script to be run each time "something" happens, if we cannot rely on events…

              But 100% separated from current store implementations. This patch does a good thing (assuming you've controlled the invalidation of those caches), but from a wrong place.

              I know that "caches must all be shared" is really frustrating, but that's the way they are. So any advance in this "pseudo cluster control" cannot be accepted by directly implementing it in current stores, but by some "delegated" solution, ideally automated (events, hooks, remote calls…) or manual (a separate script).

              So I'm tending to -1 this, sorry. More opinions are welcome...

              stronk7 Eloy Lafuente (stronk7) added a comment - edited

              About having some store writing everywhere and reading only locally… well, that's an idea worth considering. Surely it can be achieved simply by being able to mark a given store in a node as read-only.

              And then, a secondary, shared store for the whole cluster, working normally (r/w). Still, the read-only instances would need that custom invalidation (ah, no - the main server would invalidate them all). So we are only missing the "read-only" ability for this to be achievable.

              Or, if custom behaviors are expected… why not simply create a new memcached4cluster store? There you should be able to define exactly whatever you want (multiple/spread invalidations, read only, local and shared instances…). Sounds implementable. And if we see that the read-only feature is good, we could extend it to all stores.

              Ciao
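              The "read-only" ability floated above could be as small as this. A minimal Python sketch under the assumption that a store can be wrapped per node; the class and method names are invented here, and the backend is a plain dict standing in for a real store.

```python
# A node-local wrapper that permits gets but rejects writes, so a front end
# could read its local replica while only the main server performs
# invalidations. Names are illustrative, not Moodle API.
class ReadOnlyStore:
    def __init__(self, backend):
        self.backend = backend

    def get(self, key):
        return self.backend.get(key)

    def set(self, key, value):
        raise RuntimeError('store is read-only on this node')

store = ReadOnlyStore({'lang/en/home': 'Home'})
print(store.get('lang/en/home'))  # -> Home
try:
    store.set('x', 1)
except RuntimeError as e:
    print(e)                      # -> store is read-only on this node
```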

              emerrill Eric Merrill added a comment -

              After the dev chat, I thought a bit more.

              I'm whipping up a proof of concept memcache store that fetches from one list of servers, but sets/deletes/purges to another list. You get shared caches, with very fast gets, but at the cost of sets. I'm going to do some benchmarking to see if things like strings and the db benefit from this. I should be able to report back today/tomorrow with results.

              stronk7 Eloy Lafuente (stronk7) added a comment -

              (super, thanks!)

              quen Sam Marshall added a comment -

              The new approach sounds like a great idea that might solve the problem generically.

              By the way, in our load testing we found truly enormous performance gains from storing caches locally, particularly the string cache. A quick look at a small Moodle course view page (2.6 dev) seems to indicate that it makes about 400 requests to the file store cache at default settings, nearly all of which are for strings. If that's actually true then, given that ping between two servers in our network is about 0.5ms, and assuming no other delays and only one round trip per request, that's a massive 200ms just to read strings from a shared cache! It's no wonder that MUC made performance worse until we did our current, hacky solution for local memcache.
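              Sam's back-of-the-envelope figure checks out; as a quick sanity check (numbers taken from the comment above, assuming exactly one round trip per request and no other delays):

```python
# Estimated network cost of reading every string from a shared cache.
requests = 400   # observed file store cache requests on one course view page
rtt_ms = 0.5     # measured ping between the two servers, in milliseconds

total_ms = requests * rtt_ms  # one round trip per request assumed
print(total_ms)  # -> 200.0
```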

              stronk7 Eloy Lafuente (stronk7) added a comment -

              FYI: I'm moving this out from current integration, let's see where/how it ends. Many thanks guys!

              cibot CiBoT added a comment -

              Moving this reopened issue out from current integration. Please, re-submit it for integration once ready.

              emerrill Eric Merrill added a comment -

              I've created a pair of new plugins (memcache and memcached):
              https://github.com/merrill-oakland/moodle/compare/f8eff10319e1bd67e10634e79002b850702946c6...MDL-42071_getset_memcache
              https://github.com/merrill-oakland/moodle/compare/f8eff10319e1bd67e10634e79002b850702946c6...MDL-42071_getset_memcached

              They are subclassed off of their respective plugins, and are pretty tightly integrated (as much as possible is done by the parent).

              The basics:
              In the config, you have Fetch Servers and Set Servers. The Set Servers list doesn't use weights and MUST contain all the servers in Fetch Servers. I don't just add the Fetch servers to Set automatically because they are likely to be different hostnames for the same actual memcached server, and that would cause us to unnecessarily modify some servers twice.

              Gets are executed off of the Fetch Server pool, while set/delete/purge is executed against each server listed in Set Servers list.

              This isn't quite all done - still need to do unit tests, and a few cleanup type items, but it should be all functional - I'll be doing some performance testing later today to try and benchmark the change.

              I would note that it is possible for the caches to get out of sync if there are network connectivity issues (this is also possible with the current memcache/memcached stores). I could make the cache non-functional if any server is unavailable, but I think the downsides of that outweigh the upside - most clustering environments are unlikely to have intermittent network connectivity failures on any kind of a regular basis. I would just document the possible failure modes (this is already a pretty advanced use case).
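              The split described above can be sketched in a few lines. This is only a toy model (Python, with plain dicts standing in for memcached connections; the real stores are PHP subclasses of the memcache/memcached plugins), but it shows the asymmetry: gets touch one pool, while every mutation is broadcast to all set servers:

```python
class ClusterStore:
    """Toy model of the clustered store: reads hit one pool,
    while writes/deletes/purges go to every configured set server."""

    def __init__(self, fetch_server, set_servers):
        # fetch_server is typically the node-local instance; set_servers
        # MUST include every fetch server, or reads will go stale.
        self.fetch = fetch_server
        self.set_servers = set_servers

    def get(self, key):
        # Fast path: only the (local) fetch pool is consulted.
        return self.fetch.get(key)

    def set(self, key, value):
        # Slow path: the write is broadcast to every set server.
        for server in self.set_servers:
            server[key] = value

    def delete(self, key):
        for server in self.set_servers:
            server.pop(key, None)

    def purge(self):
        for server in self.set_servers:
            server.clear()


local, remote = {}, {}
store = ClusterStore(fetch_server=local, set_servers=[local, remote])
store.set("lang/en/moodle", "Course")   # lands on both servers
print(store.get("lang/en/moodle"))      # -> Course (read locally)
```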

              emerrill Eric Merrill added a comment -

              I've added this as a sub-task of the clustering Epic, MDL-40979, because I think it belongs there. Someone can remove it if they think otherwise.

              emerrill Eric Merrill added a comment -

              Ok, I ran some tests.
              Setup:
              local server - running Apache/PHP, Postgres, and memcached
              remote server - running NFS and memcached
              They are connected over gigabit through a switch; ping between them is ~0.45ms.

              Running 50 loads of course view:
              With memcached (set for langs and db schema) on remote server I get 188ms page creation time.
              With clustered memcached (local fetch, local and remote set, set for langs and db schema), I get 169ms.
              A difference of 19ms, or ~10%.

              I could also save another ms or two (and network overhead) if I don't ping the set servers on plugin setup - still thinking about that.

              I also ran the performance comparison, see attached Cluster-performance.png, with 3 runs:
              Blue - Stock config, with dataroot on remote server.
              Red - Memcached cache on remote server for Lang Strings and DB Schema. Otherwise identical.
              Orange - Memcached Cluster cache on the local server, with sets going to both the local and remote servers, used for Lang Strings and DB Schema. Otherwise identical.

              This shows similar behavior, with about a 10-15% saving over plain memcached, and ~20% over the NFS-backed file cache.

              What isn't shown here is the ability to move load off of a single point (the central memcached server) - and while memcached is not resource intensive in general, you also avoid network connection setup and teardown, among other things.
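              For reference, the headline saving above works out as follows (figures taken from this comment):

```python
remote_ms = 188    # page creation time, memcached on the remote server only
cluster_ms = 169   # page creation time, clustered memcached (local fetches)

saving_ms = remote_ms - cluster_ms
saving_pct = 100 * saving_ms / remote_ms
print(saving_ms, round(saving_pct, 1))  # -> 19 10.1
```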

              emerrill Eric Merrill added a comment -

              As a note on the attached screenshot - you can see the expected behavior that the first page load is slower than with standard memcached, due to having to set two servers when priming the cache.

              stronk7 Eloy Lafuente (stronk7) added a comment - - edited

              Wow, nice job Eric, it's looking promising isn't it? An overall look at the memcached implementation says it's clear and clever IMO.

              emerrill Eric Merrill added a comment -

              Thanks!

              I think it is. The more I work on this, the more I think this concept is the proper way to go for this type of configuration - pretty much all of the benefit, with a much much smaller risk profile.

              emerrill Eric Merrill added a comment -

              The path of this ticket has changed quite a bit, so I've edited the description and title to better reflect the current path. See the history to see originals if interested.

              quen Sam Marshall added a comment -

              Thanks for this work and the performance testing, sounds great. (By the way it would be great if Moodle HQ's standard performance testing could include a setup with web server on different hardware from the other pieces - NFS filesystem, database, and memcache - too!)

              I was curious as to why Eric's performance testing showed less improvement than I'd expect based on my comment above. (Of course a 10% improvement is worth having anyway, just that it's different.) After looking at it again I realised I'd made a basic mistake: the ~400 string loads were when logged in as admin. Oops. Sorry. Guess that's the settings menu.

              Logged in as student I only see 44 loads from cachestore_file for strings (10 for database_meta, 3 yuimodules, 2 coursemodinfo, 1 langmenu). So, given that total of 54 items, a saving of 19ms sounds just as expected.

              emerrill Eric Merrill added a comment -

              Yup - admin vs regular users is exactly why. For admins the benefit is much more drastic.

              emerrill Eric Merrill added a comment -

              Ok, updated the ticket with a new branch - it has two commits, one for memcachedcluster and one for memcachecluster.

              Unit tests in place, and I think all the code is cleaned up.

              quen Sam Marshall added a comment -

              I had a quick look at one of the commits. It looked basically fine, but I do have some questions (which you may have already answered, apologies if I didn't remember it from an earlier comment here):

              1) Is it necessary to make this into a different plugin - wouldn't it keep things simpler to modify the existing memcache plugin with an additional optional setting for 'put' servers? (Bad enough that there are two memcache plugins, four seems excessive.)

              2) Unless I'm misunderstanding, it looks like a lot of lang strings are same as from standard memcache; are these actually used? If not can they be deleted, or if they are really needed, I think you can use AMOS script to make it copy the existing strings for all translations.

              3) Trivia: I think =& is not necessary since those are objects, so it should probably just be =, unless I misunderstood.

              As noted before I'm really positive about this change, in fact I think it's critical for large systems (especially if MUC usage increases).

              emerrill Eric Merrill added a comment -

              1) Really the sub-classing just got in my head somehow, and so that's what I did - I don't think there was a particularly strong reason. That being said, I agree about the number of memcache plugins, and it would be quite easy to move the logic into the core memcache(d) plugins if you think that is the better way to go.

              2) Yup, forgot to clean that up, will do.

              3) Re-reading the PHP docs, you are right. For some reason it slipped my mind that objects are different (even though I know that's how it works. D'oh). I'll fix.

              emerrill Eric Merrill added a comment -

              After some discussion among Eloy, Sam, and me, we decided that for now I'm going to release this as a contrib plugin to get feedback and testing, and later in the 2.7 dev cycle we'll talk again about whether (and how) this should be brought into core.

              I've setup githubs at:
              https://github.com/merrill-oakland/cachestore_memcachedcluster
              https://github.com/merrill-oakland/cachestore_memcachecluster

              quen Sam Marshall added a comment -

              For info, I've put this (the non-d version) into the OU's codebase now. It seems to work - this depends on other people but I'm hoping we can try this out during our acceptance test period in November, and potentially use it on our live system in December. I'll comment back if/when we have results.

              emerrill Eric Merrill added a comment -

              We've put this live on 2.5 today at Oakland.

              quen Sam Marshall added a comment -

              Couple of things:

              1. We're now doing performance testing on this (comparing against our previous approach of just caching stuff on localhost using the standard 'memcache' store, which has obvious functional disadvantages, such as not being able to purge caches from the Moodle interface). Performance should be the same provided there are essentially no writes, excepting a slight increase in startup cost as it pings multiple servers, but we're testing it anyway. Results are inconclusive so far; I'll update when we have anything.

              2. I filed issue MDL-42839 regarding inadequate tolerance for whitespace in the config settings for memcache/memcached - this plugin already has better whitespace handling, but might consider allowing blank lines (see my code in that issue).
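              The whitespace handling Sam refers to boils down to trimming each entry and skipping blank lines when parsing the newline-delimited server list. A minimal sketch (in Python rather than the plugin's PHP; parse_servers and the default-port fallback are illustrative assumptions, not the plugin's actual code):

```python
def parse_servers(config_text):
    """Parse a newline-delimited 'host:port' server list,
    tolerating surrounding whitespace and blank lines."""
    servers = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line:          # allow blank lines between entries
            continue
        parts = line.split(":")
        host = parts[0]
        # Fall back to the conventional memcached port if none is given.
        port = int(parts[1]) if len(parts) > 1 else 11211
        servers.append((host, port))
    return servers


print(parse_servers("127.0.0.1:11211\n\n  127.0.0.1:11210  \n"))
# -> [('127.0.0.1', 11211), ('127.0.0.1', 11210)]
```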

              emerrill Eric Merrill added a comment -

              We've been using these for ~6 months and they seem to work very well. But from the dev chat, some integrators want it to be a setting for the existing stores, so I'm going to re-write that over the next few days. Hopefully we can come to some form of agreement and get this integrated early in the 2.8 cycle. Stay tuned.

              timhunt Tim Hunt added a comment -

              We have been using them for a while too. They work nicely. I don't see a strong case for change.

              quen Sam Marshall added a comment -

              I agree with integrators that it makes more sense as a setting for the existing memcache stores (if it goes into core). This will be a nice improvement in core (although given MDL-45375 we may need to move to redis in the long term - we're doing okay with memcache and memcachecluster now though).

              emerrill Eric Merrill added a comment -

              I've added a new revision of this, that modifies the memcached store to add the functionality.

              I've included pretty extensive PHPUnit testing of both the new code and the old code. To test, you just need to set TEST_CACHESTORE_MEMCACHED_TESTSERVERS to multiple (newline-delimited) servers.

              I'm a little unsure of the verbiage for the settings - so I'm very open to input there. Reality is that this will need a pretty good doc page (which I will write) to fully explain the functionality.

              Once this is more vetted, I'll add the same for memcache.

              cibot CiBoT added a comment -

              Results for MDL-42071
              Remote repository: https://github.com/merrill-oakland/moodle.git
              Remote branch MDL-42071_MK3 to be integrated into upstream master
              Executed job http://integration.moodle.org/job/Precheck%20remote%20branch/3256
              Details: http://integration.moodle.org/job/Precheck%20remote%20branch/3256/artifact/work/smurf.html

              cibot CiBoT added a comment -

              Results for MDL-42071
              Remote repository: https://github.com/merrill-oakland/moodle.git
              Remote branch MDL-42071_MK3 to be integrated into upstream master
              Executed job http://integration.moodle.org/job/Precheck%20remote%20branch/3257
              Details: http://integration.moodle.org/job/Precheck%20remote%20branch/3257/artifact/work/smurf.html
              moodle.com moodle.com added a comment -

              It would be good if this could be reconsidered and re-reviewed.

              stronk7 Eloy Lafuente (stronk7) added a comment -

              I'd vote +1 to promote this to core, replacing the current memcache/memcached implementations, if it's guaranteed the clustered stores are able to work in single-server mode without a problem. So, for the masses, it will just be a new option in their already existing stores.

              poltawski Dan Poltawski added a comment -

              Code looks good to me, sending for integration

              (sorry I didn't notice it before)

              emerrill Eric Merrill added a comment -

              Well that moved fast...

              I'll try and get this rebased and the memcache version added this evening before integration starts. If an integrator gets to this before I've updated it, we can push it off until next week.

              cibot CiBoT added a comment -

              Moving this issue to current integration cycle, will be reviewed soon. Thanks for the hard work!

              emerrill Eric Merrill added a comment - - edited

              Ok, rebased, added memcache, and updated test instructions. I've left memcache and memcached as separate commits, because that seems to make more sense to me, but they can easily be squashed if that is what the integrator prefers.

              samhemelryk Sam Hemelryk added a comment -

              Thanks Eric - after really looking into this I can honestly say I really like it, and have integrated it.

              There was only one thing that I kept coming back to. I think the requirement that the server(s) used for fetching also be included in Set Servers should probably be flagged more prominently. I imagine things will go really badly if it is forgotten; at present it is marked clearly within the help, but if the help is not looked at it would not necessarily be seen.
              After thinking about it I see no harm in opening a new issue for that, so in this goes and I'll open a new issue.

              Also just noting that documentation will need to be updated. I updated the cache docs only recently:
              http://docs.moodle.org/27/en/Caching#Memcache
              http://docs.moodle.org/27/en/Caching#Memcached

              Perhaps this would provide reason enough to create dedicated pages for these cache stores so that their uses can be properly documented.
              I'll add the docs_required label to this shortly.

              Cheers
              Sam

              samhemelryk Sam Hemelryk added a comment -

              Linked to MDL-46213 to see the notification improved.
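              One way the notification could work (a hypothetical configuration-time check, sketched in Python; the stores themselves are PHP and this helper is not Moodle code) is to warn when a fetch server is missing from the set-server list:

```python
def missing_set_servers(fetch_servers, set_servers):
    """Return fetch servers that are absent from the set-server list.
    A non-empty result means reads could go stale, since writes
    would never reach those servers."""
    return [s for s in fetch_servers if s not in set_servers]


fetch = ["127.0.0.1:11211"]
sets = ["127.0.0.1:11211", "10.0.0.2:11211"]
print(missing_set_servers(fetch, sets))  # -> [] (configuration is consistent)
```

              Note that, as Eric pointed out earlier, fetch and set entries may be different hostnames for the same physical server, so a literal comparison like this can only warn, not hard-fail.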

              poltawski Dan Poltawski added a comment - - edited

              Failing, as the test isn't coping with the memcached constant not being defined:

              There were 2 errors:
               
              1) cachestore_memcache_test::test_clustered
              Use of undefined constant TEST_CACHESTORE_MEMCACHE_TESTSERVERS - assumed 'TEST_CACHESTORE_MEMCACHE_TESTSERVERS'
               
              /Users/danp/moodles/im/moodle/cache/stores/memcache/tests/memcache_test.php:144
              /Users/danp/moodles/im/moodle/lib/phpunit/classes/advanced_testcase.php:80
               
              To re-run:
               vendor/bin/phpunit cachestore_memcache_test cache/stores/memcache/tests/memcache_test.php
               
              2) cachestore_memcached_test::test_clustered
              Use of undefined constant TEST_CACHESTORE_MEMCACHED_TESTSERVERS - assumed 'TEST_CACHESTORE_MEMCACHED_TESTSERVERS'
               
              /Users/danp/moodles/im/moodle/cache/stores/memcached/tests/memcached_test.php:144
              /Users/danp/moodles/im/moodle/lib/phpunit/classes/advanced_testcase.php:80
               
              To re-run:
               vendor/bin/phpunit cachestore_memcached_test cache/stores/memcached/tests/memcached_test.php
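               A guard along these lines lets the test skip cleanly when the constant is absent. This is a minimal sketch, not the real test code: the constant name comes from the failure above, while get_test_servers() is a hypothetical helper.

```php
<?php
// Minimal sketch: skip the optional clustered test when the config.php
// constant is not defined, instead of erroring on an undefined constant.
// get_test_servers() is a hypothetical helper, not the actual test code.
function get_test_servers(): ?array {
    if (!defined('TEST_CACHESTORE_MEMCACHED_TESTSERVERS')) {
        // In a real PHPUnit test this would call $this->markTestSkipped(...).
        return null;
    }
    // One "host:port" entry per line, as configured in config.php.
    return array_map('trim', explode("\n", TEST_CACHESTORE_MEMCACHED_TESTSERVERS));
}

var_dump(get_test_servers());
```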
              

              emerrill Eric Merrill added a comment -

              Sorry about that Dan Poltawski - I thought I had taken care of that :/

              Added a commit cd230b31b6b4026929feb71e0e4f131331c49ff6

              poltawski Dan Poltawski added a comment - - edited

              Thanks Eric. Integrated and back to testing.

              dmonllao David Monllaó added a comment -

              Fatal error in cache/admin.php?action=addstore&plugin=memcached&sesskey=XXXXXXXXXXXX

              ( ! ) Fatal error: Can't use function return value in write context in /home/davidm/Desktop/moodlecode/INTEGRATION/master/cache/stores/memcached/addinstanceform.php on line 102
              Call Stack
              #	Time	Memory	Function	Location
              1	0.0012	335560	{main}( )	../admin.php:0
              2	0.6652	48124552	cache_administration_helper::get_add_store_form( )	../admin.php:69
              

              emerrill Eric Merrill added a comment -

              I do not see this error locally; it turns out to be a difference between PHP 5.5 and older versions: before 5.5, empty() only accepted variables, not function return values like empty(trim(...)).

              New commit added 9043a10907f17ef0a6145a83cdbc833c918fff20
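               For context, the incompatibility is that prior to PHP 5.5, empty() was restricted to variables, so passing it a function return value was a parse-time fatal error. The portable pattern is to assign first; a generic sketch (not the actual addinstanceform.php code):

```php
<?php
// Before PHP 5.5, empty(trim($x)) fails with
// "Can't use function return value in write context".
// Assigning the result to a variable first works on all versions.
$setservers = "  \n  ";
$trimmed = trim($setservers);
if ($trimmed === '') {
    echo "no servers configured\n";
}
```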

              samhemelryk Sam Hemelryk added a comment -

              Aha thanks for fixing that Eric, I looked at that interface as well and didn't pick it up.

              Sending this back up for testing again now that this has been fixed.

              dmonllao David Monllaó added a comment -

              Hi, thanks for working on this. I'm not sure whether I've checked #8 correctly, so I'm pasting my results and waiting for comments before passing. Everything else seems good.

              Memcached
              ---------

              In #8 I see:

              echo stats | nc 127.0.0.1 11211 | grep cmd_
              STAT cmd_get 500
              STAT cmd_set 250
              STAT cmd_flush 0
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11211 | grep _misses
              STAT get_misses 33
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
               
              echo stats | nc 127.0.0.1 11210 | grep cmd_
              STAT cmd_get 0
              STAT cmd_set 33
              STAT cmd_flush 0
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11210 | grep _misses
              STAT get_misses 0
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
              

              In #10 after running admin/cli/purge_cache.php

              echo stats | nc 127.0.0.1 11211 | grep cmd_
              STAT cmd_get 502
              STAT cmd_set 253
              STAT cmd_flush 3
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11211 | grep _misses
              STAT get_misses 34
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
               
              echo stats | nc 127.0.0.1 11210 | grep cmd_
              STAT cmd_get 0
              STAT cmd_set 34
              STAT cmd_flush 3
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11210 | grep _misses
              STAT get_misses 0
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
              

              Memcache
              --------

              In #8

              echo stats | nc 127.0.0.1 11211 | grep cmd_
              STAT cmd_get 494
              STAT cmd_set 247
              STAT cmd_flush 0
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11211 | grep misses
              STAT get_misses 33
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
               
              echo stats | nc 127.0.0.1 11210 | grep cmd_
              STAT cmd_get 0
              STAT cmd_set 33
              STAT cmd_flush 0
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11210 | grep misses
              STAT get_misses 0
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
              

              In #10 after running admin/cli/purge_cache.php

              echo stats | nc 127.0.0.1 11211 | grep cmd_
              STAT cmd_get 496
              STAT cmd_set 250
              STAT cmd_flush 2
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11211 | grep misses
              STAT get_misses 34
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
               
              echo stats | nc 127.0.0.1 11210 | grep cmd_
              STAT cmd_get 0
              STAT cmd_set 34
              STAT cmd_flush 2
              STAT cmd_touch 0
               
              echo stats | nc 127.0.0.1 11210 | grep misses
              STAT get_misses 0
              STAT delete_misses 0
              STAT incr_misses 0
              STAT decr_misses 0
              STAT cas_misses 0
              STAT touch_misses 0
              

              Skipped unit test:

              vendor/bin/phpunit cachestore_memcache_test cache/stores/memcache/tests/memcache_test.php
              Moodle 2.8dev (Build: 20140626), pgsql, 41dcab66d1d55aa0141f79872e82795235e35b42
              PHPUnit 3.7.37 by Sebastian Bergmann.
               
              Configuration read from /home/davidm/Desktop/moodlecode/INTEGRATION/master/phpunit.xml
               
              ..S
               
              Time: 1.74 seconds, Memory: 54.50Mb
               
              OK, but incomplete or skipped tests!
              Tests: 3, Assertions: 391, Skipped: 1.
              

              emerrill Eric Merrill added a comment -

              Sorry I wasn't clear: for #8 you should be looking at the performance info at the bottom of the page. But all your numbers look as expected.

              The skipped test is inherited from cachestores_test, and it is skipped in master already (that may need looking into separately).

              dmonllao David Monllaó added a comment -

              Thanks for the reply Eric, passing it.

              poltawski Dan Poltawski added a comment -

              This change is now part of Moodle! Thanks for your contribution!

              Before software can be reusable it first has to be usable.
              --Ralph Johnson

              damyon Damyon Wiese added a comment -

              This caused a regression for me in the settings for the memcache store:

              Notice: Undefined property: stdClass::$setservers in /home/damyonw/Documents/Moodle/integration/master/moodle/cache/stores/memcache/lib.php on line 442
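               The usual defensive fix for a notice like this is to test the property before reading it. A hedged sketch only: $data and setservers merely mirror the notice above; this is not the actual lib.php fix.

```php
<?php
// Sketch: read an object property that may be missing without raising a
// notice. The names mirror the report above and are illustrative only.
$data = new stdClass();
$setservers = isset($data->setservers) ? $data->setservers : '';
echo $setservers === '' ? "setservers not set\n" : $setservers . "\n";
```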

              emerrill Eric Merrill added a comment -

              Opened MDL-46802

              emerrill Eric Merrill added a comment -

              Sorry, didn't see the ticket you opened... :/

              marycooch Mary Cooch added a comment -

              I see there is a docs_required label here and that we have had some updating done to https://docs.moodle.org/27/en/Caching
              Are we OK to remove the docs_required label or does more need to be done?

              marycooch Mary Cooch added a comment -

              Removing docs_required label as more than 3 months have elapsed since my previous comment.
              If there is anything which needs to be added to the user documentation, please comment and re-add the docs_required label.



                People

                • Votes: 4
                  Watchers: 19

                Dates

                • Fix Release Date: 10/Nov/14