This is almost the same pattern as MDL-68481: broadly, the issue is that this synchronously zips and downloads a massive chunk of files, which can take ages and either hits various timeouts or forces them to be increased.
1) the zip preparation is done as a first step with a progress bar, which then redirects to a second step that does the download. As the download happens in a second HTTP request, the zip would need to be copied to shared disk and then cleaned up afterwards
2) both steps above can close the session
3) bonus points (might be contentious): every time an assignment submission is uploaded, an ad hoc task is rescheduled / queued for some healthy point in the future, say 6 hours out, which does the zip step ahead of time and stores the result via the File API. With a 6-hour delay, a steady stream of uploads keeps pushing the final zip back until things quiet down, which should happen around the due date (say midnight), and then the next day (assuming no late submissions) the zip is ready for the markers to download:
If anyone still wants it in the meantime then you redo step 1 synchronously, and when that is done it would also de-queue the ad hoc task as it is no longer needed. Most assignment files will not compress well because they are already compressed, so if this gets implemented maybe it should be behind a setting, as it trades disk space against download and generation time. If that setting is on then step 1 would NOT need to clean up the files, but we may want to clean up zip files from old assignments to reduce disk usage.
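To make the debounce behaviour in 3) concrete, here is a minimal, language-agnostic sketch (Python rather than Moodle PHP, and all names here are hypothetical, not real Moodle APIs): each upload reschedules the pending zip task to now + 6 hours, and a synchronous download de-queues it.

```python
from datetime import datetime, timedelta

# Assumed debounce window from the proposal above.
DEBOUNCE = timedelta(hours=6)

class ZipTaskQueue:
    """Toy model of 'reschedule the ad hoc zip task on every upload'."""

    def __init__(self):
        # assignment id -> datetime when the ad hoc zip task should run
        self.scheduled = {}

    def on_submission_upload(self, assignid, now):
        # Every upload pushes the zip build back to now + 6h, so the task
        # only fires once uploads quiet down (e.g. after the due date).
        self.scheduled[assignid] = now + DEBOUNCE

    def on_synchronous_download(self, assignid):
        # A marker asked for the zip now: it gets built synchronously
        # (step 1), so de-queue the ad hoc task as it is no longer needed.
        self.scheduled.pop(assignid, None)

    def due(self, assignid, now):
        # Would the cron runner pick this task up at time `now`?
        t = self.scheduled.get(assignid)
        return t is not None and now >= t
```

Usage: three uploads an hour apart mean the task only becomes due 6 hours after the last one, and a manual download cancels it entirely.

```python
q = ZipTaskQueue()
t0 = datetime(2024, 1, 1, 18, 0)
for h in (0, 1, 2):
    q.on_submission_upload(42, t0 + timedelta(hours=h))
# Not due 6h after the first upload, only 6h after the last:
print(q.due(42, t0 + timedelta(hours=6)))   # False
print(q.due(42, t0 + timedelta(hours=8)))   # True
q.on_synchronous_download(42)
print(q.due(42, t0 + timedelta(hours=8)))   # False
```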