Improvement
Resolution: Won't Do
Major
None
3.1, 3.2
None
MOODLE_31_STABLE, MOODLE_32_STABLE
This is a follow-up of MDL-53758, where this was discussed and where it was ultimately decided to handle it in a separate issue.
Basically, right now the search engines always process all documents, from the first one up to the last one needed to fill a given page N. So, internally, to return page 2 we really perform all the calculations needed for both page 1 (whose results are then skipped) and page 2. And this gets worse and worse as the page number grows.
On the other side, all the search engines are able to "know" the number of documents (X) they needed to process for a given page N. So it sounds natural that, in order to get the results for page N+1, we can safely skip those X documents if the engine supports it.
So, this is basically about articulating some way for engines to be aware of the X documents already processed when page N was requested for a given XXXXX search, and allowing them to use that information for quicker access to page N+1.
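A minimal sketch of that idea (all names here are invented for illustration, not part of the Moodle API): after serving page N, the engine records how many raw documents X it processed; a later request for page N+1 looks that count up and starts from there, falling back to offset 0 when nothing is recorded.

```python
# Hypothetical sketch: (search hash, page) -> number of raw documents
# processed to serve that page. Pages are 1-based, as in the description.

def record_progress(progress, search_hash, page, docs_processed):
    """Remember that serving `page` of this search consumed `docs_processed` docs."""
    progress[(search_hash, page)] = docs_processed

def start_offset(progress, search_hash, page):
    """Offset to start processing from when serving `page`; 0 if unknown."""
    if page <= 1:
        return 0
    # If we know how many docs the previous page consumed, skip them;
    # otherwise fall back to processing from the first document.
    return progress.get((search_hash, page - 1), 0)

progress = {}
record_progress(progress, 'abc123', 1, 37)   # page 1 needed 37 raw docs
print(start_offset(progress, 'abc123', 2))   # page 2 can skip those 37
print(start_offset(progress, 'abc123', 3))   # nothing recorded: start at 0
```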
Of course, if the information is not available... then the current approach should be followed, processing all documents from the first one. But for normal navigation, where going to the next page is the most common action, it could have a dramatic effect when retrieving pages > 1.
Finally, it seems that both Solr (via setStart()) and the SQL-based backends (via the LIMIT clause) support quick skipping, so it sounds like a win, at the very least for those engines.
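As a hedged illustration of those two native skipping facilities (the query text, table, and column names are invented for the example; SolrQuery::setStart() and SQL's LIMIT/OFFSET are the real mechanisms being referred to):

```python
def sql_for_page(offset, page_size):
    # SQL-based engines can skip the already-processed rows natively
    # with a LIMIT ... OFFSET clause, instead of recomputing them.
    return (f"SELECT docid, title FROM search_documents "
            f"ORDER BY docid LIMIT {page_size} OFFSET {offset}")

def solr_params_for_page(offset, page_size):
    # The Solr equivalent is setStart(offset) / setRows(page_size),
    # expressed here as the resulting request parameters.
    return {'start': offset, 'rows': page_size}

print(sql_for_page(37, 10))
print(solr_params_for_page(37, 10))
```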
Surely the main point is to think about the "storage" to be used to keep all those (search hash + page number + documents processed) tuples, their TTLs, and so on.
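One possible shape for that storage, sketched as an in-memory map with a per-entry TTL (class and method names are invented; a real implementation would presumably live in Moodle's caching layer instead):

```python
import time

class PagingProgressStore:
    """Keeps (search hash, page) -> docs processed, expiring after a TTL."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.entries = {}   # (search_hash, page) -> (docs_processed, stored_at)

    def put(self, search_hash, page, docs_processed):
        self.entries[(search_hash, page)] = (docs_processed, time.time())

    def get(self, search_hash, page):
        item = self.entries.get((search_hash, page))
        if item is None:
            return None
        docs_processed, stored_at = item
        if time.time() - stored_at > self.ttl:
            # Expired: drop the entry and behave as if it were never stored,
            # which makes the engine fall back to processing from document 1.
            del self.entries[(search_hash, page)]
            return None
        return docs_processed

store = PagingProgressStore(ttl_seconds=300)
store.put('abc123', 1, 37)
print(store.get('abc123', 1))   # 37
print(store.get('abc123', 2))   # None: nothing recorded for page 2
```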
Original comments in the linked issue start here.
And... that's it, ciao
This issue has been marked as being related by MDL-53758 Global Search Filling and Performance Issues (Closed).