Problem/Motivation
While thinking about #3571858: Optimize OptimizedPhpArrayDumper further, I realised we can make even bigger savings by swapping to Symfony's PhpDumper. This means we don't have to deserialize a container at all: the built container can be cached in the PHP opcache.
This was previously implemented very early in Drupal 8, but later removed in #2497243: Replace Symfony container with a Drupal one, stored in cache due to #2547827: PhpStorage: past, present and future.
I think the problems described in #2547827: PhpStorage: past, present and future are solvable. By hashing the compiled PHP, we can store that hash in the database as validation that the container is the current one. This still requires a trip to the database, but we only retrieve the stored container hash instead of a serialized container. We can also use the hash as part of the container class name to handle the case where multiple containers need to exist for container rebuilds (module install/uninstall) at runtime.
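The hash scheme described above can be sketched in a few lines. This is an illustrative outline, not the MR's actual code; all names here are hypothetical, and the real dump comes from Symfony's PhpDumper rather than a literal string.

```php
<?php

// Illustrative sketch of hash-based container validation. All names
// are hypothetical; in reality $compiledPhp is the PhpDumper output.

// 1. Hash the compiled container PHP.
$compiledPhp = '<?php class ServiceContainer {}';
$hash = hash('sha256', $compiledPhp);

// 2. The hash becomes part of the class name, so multiple containers
// can coexist during module install/uninstall rebuilds.
$class = 'Container_' . substr($hash, 0, 16);

// 3. Only the hash is stored in the database; the next request runs a
// single small query instead of fetching a serialized container.
$storedHash = $hash; // stand-in for a database read

// 4. Validation: the dumped container is current if the hashes match.
$isCurrent = hash_equals($storedHash, hash('sha256', $compiledPhp));
```

The container file itself is then loaded through the opcache, so the database round trip shrinks to a single short string comparison.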
Rationale
- Quicker responses for both cached (88% faster) and uncached pages (3.5% faster) - see #3583505-75: Use Symfony PhpDumper instead of a serialized array container structure
- Leverage additional container capabilities in the most performant way without having to reengineer them in Drupal. For example, this will allow #3249970: [PP-1] Support setting service parameters via environment variables to use the PHP-dumped container's optimisations to read parameters from environment variables at runtime with much less complexity in Drupal.
Proposed resolution
Introduce a ContainerStorageInterface that abstracts how compiled service containers are loaded, dumped, and invalidated. Three implementations are provided:
- PhpDumperContainerStorage — uses Symfony's PhpDumper to generate a compiled PHP class stored via PhpStorage. The compiled class benefits from OPcache.
- CacheContainerStorage — preserves the legacy behavior using OptimizedPhpArrayDumper and the cache backend.
- NullContainerStorage — no-op storage for the installer and UpdateKernel.
A ContainerStorageFactory auto-detects the implementation from the container_base_class setting. The active storage implementation is a service in the bootstrap container; sites can substitute their own ContainerStorageInterface implementation by overriding the container_storage service in $settings['bootstrap_container_definition'].
DrupalKernel::initializeContainer() is simplified to delegate to getContainerStorage() for loading and dumping, removing the $containerNeedsDumping flag and inline cache logic. The getContainerCacheKeySuffix() method replaces the deprecated getContainerCacheKey(), and now includes app.root and site.path in the hash. New sites default to PhpContainer in default.settings.php.
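A minimal sketch of the abstraction: the method names (load/dump/invalidate) come from this summary, but the exact signatures below are assumptions, not the committed API. The no-op variant shows why the installer and UpdateKernel can share the same code path.

```php
<?php

// Sketch of the storage abstraction described above. Method names come
// from the issue summary; the signatures are assumptions, not the MR.
interface ContainerStorageInterface {

  /** Loads a previously dumped container, or NULL if none is valid. */
  public function load(): ?object;

  /** Dumps the compiled container so later requests can load it. */
  public function dump(object $containerBuilder): bool;

  /** Invalidates any stored container, forcing a rebuild. */
  public function invalidate(): void;

}

// The no-op variant used by the installer and UpdateKernel: nothing is
// ever stored, so the kernel rebuilds the container on every request.
final class NullContainerStorage implements ContainerStorageInterface {

  public function load(): ?object {
    return NULL;
  }

  public function dump(object $containerBuilder): bool {
    return FALSE;
  }

  public function invalidate(): void {}

}
```

With this shape, DrupalKernel only ever asks the active storage to load or dump; it no longer needs to know whether the backing store is PhpStorage, the cache backend, or nothing at all.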
Remaining tasks
- Review
User interface changes
A new "PHP OPcode container" row appears on the status report page (admin/reports/status) when the site uses the PhpContainer base class. It shows "Cached" (OK) or "Not cached" (Warning) depending on whether OPcache has cached the compiled container file.
Introduced terminology
- Container storage — the abstraction responsible for loading, dumping, and invalidating compiled service containers.
API changes
- New interface: Drupal\Core\DependencyInjection\ContainerStorageInterface with load(), dump(), and invalidate() methods.
- New classes: PhpDumperContainerStorage, CacheContainerStorage, NullContainerStorage, ContainerStorageFactory, PhpContainer, FileLoaderStub.
- New protected method: DrupalKernel::getContainerCacheKeySuffix().
- New protected method: DrupalKernel::getContainerStorage().
- Deprecated: DrupalKernel::getContainerCacheKey() — use getContainerCacheKeySuffix() instead.
- Deprecated: DrupalKernel::cacheDrupalContainer() — container dumping is now handled by ContainerStorageInterface::dump().
- Deprecated: DrupalKernelInterface::getCachedContainerDefinition() — container loading is now handled by ContainerStorageInterface::load().
- Deprecated: DrupalKernel::$phpArrayDumperClass — the container dumper is now managed by ContainerStorageInterface implementations.
- Removed: DrupalKernel::$containerNeedsDumping property.
- UpdateKernel overrides getContainerStorage() (returning NullContainerStorage) instead of cacheDrupalContainer().
- default.settings.php now sets container_base_class to PhpContainer::class by default (active, not commented out).
- The bootstrap container (DrupalKernel::$defaultBootstrapContainerDefinition) gains five new services: settings, container_storage, container_storage.cache, container_storage.php_dumper, and php_storage.service_container. When $settings['bootstrap_container_definition'] is provided, its services are merged on top of the defaults so existing overrides continue to work. Override the container_storage service there to substitute a custom ContainerStorageInterface implementation.
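For example, a site could substitute its own storage implementation from settings.php along these lines. The override mechanism and service name come from the summary above; the custom class name is hypothetical.

```php
// settings.php — sketch of overriding the container storage service.
// MyCustomContainerStorage is a hypothetical class implementing
// \Drupal\Core\DependencyInjection\ContainerStorageInterface.
$settings['bootstrap_container_definition'] = [
  'services' => [
    'container_storage' => [
      'class' => 'Drupal\my_site\MyCustomContainerStorage',
    ],
  ],
];
```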
Data model changes
N/A
Release notes snippet
Drupal now supports Symfony's PhpDumper for the service container, generating a compiled PHP class that benefits from OPcache. New sites use this by default via the PhpContainer base class. Existing sites continue using the legacy PHP array container. To opt in, set $settings['container_base_class'] to \Drupal\Core\DependencyInjection\PhpContainer::class in settings.php. Container storage is now abstracted behind ContainerStorageInterface, and a status report check warns when the compiled container file is not cached by OPcache.
Comments
Comment #3
longwave
MR !15371 was assisted by Claude Code.
Some xhprof numbers on a node page from a functional test:
Comment #4
longwave
Comment #5
longwave
We can make a similar optimisation in the bootstrap container, which is currently provided as an array definition that eventually only constructs one service; instead we can hardcode this as PHP to save parsing the array structures in the same way every time. The bootstrap container is still overridable via settings.php for backward compatibility.
Comment #6
longwave
Tested again against PageCacheTest::testConditionalRequests:
Comment #7
amateescu commented
I think @chx's idea from #2497243-81: Replace Symfony container with a Drupal one, stored in cache is still worth exploring, mostly because of PhpStorage's inherent problems with slow shared filesystems.
Comment #8
longwave
Can't that be mitigated with opcache invalidation settings? If so, hopefully we can just document what NFS users have to do. Alternatively we might be able to provide an alternative local path in settings.php?
Comment #9
amateescu commented
That would still require each site builder/owner to know and think about this stuff, while the stream wrapper idea works pretty much out of the box :)
Comment #10
longwave
I thought we didn't want to store raw PHP in the database as it's a possible route to remote code execution. Otherwise we could already have stored the compiled PHP in cache?
Comment #11
nicxvan commented
How does it compare with the other issue's optimizations?
Comment #12
longwave
Drupal\Tests\system\Functional\System\AdminTest::testAdminPages() loads over 1200 services. For each branch I did two test runs and took the second run result. This also uses ~250Kb less memory.
Comment #13
catch
Could this use MTimeProtectedFileStorage instead of the manual hash validation, or was that already firmly rejected?
Checking the mtime over nfs would likely have some overhead but it possibly might not be worse than a database query.
Comment #14
longwave
@catch PhpStorageFactory constructs an MTimeProtectedFileStorage instance by default (and can be overridden in settings.php).
The additional hash technique exists for the following reasons:
Choosing a hash of the built container as the identifier and class name fulfills all these requirements.
Comment #15
longwave
Might make sense to move the BootstrapContainer change out to its own issue; that can be done entirely separately - but I think it's worth doing given the container is technically hardcoded already, just as an array instead of PHP.
Comment #16
alexpott
+1 to #15 - the change makes sense but it is not really connected to this.
Comment #17
longwave
NW to split out the bootstrap container and address the parameters review comment.
Comment #18
oily commented
Re: #3 and #13, reading through the Symfony PhpDumper code at
https://github.com/symfony/dependency-injection/blob/8.1/Dumper/PhpDumpe...
I see that at line 125 there is the $as_files option, 'To split the container in several files'.
Did the Drupal 8 implementation of PhpDumper split the container into multiple files? Would that option be used/ need to be used in this swap back to PhpDumper?
Could it impact performance?
Comment #19
catch
@oily I can't see any reason to split into multiple files if we have to load it all on every request anyway; it would just mean more file i/o. Is there a specific reason you're asking about this?
@longwave OK thanks for explaining, I think we should add some of that to inline comments in DrupalKernel::initializeContainer() though.
One question - do we need to add test coverage for having a container cache key, but no respective file on disk (file got manually deleted, database got backed up and moved to a different environment etc.)?
Comment #20
berdir
Didn't review yet, just saw the comment from @catch in #3583040: Cache the container definition in APCu
> (no unserialize at all even from APCu)
not sure if that refers to the unserialize of the whole thing or the unserialize of each definition that we do. I quickly looked into the definition unserialize bit when testing redis compression and whether that's worth doing with that, but not doing it resulted in a fairly significant memory increase on cached pages, so we should definitely test that when making changes here.
Comment #21
catch
I meant the unserialize of the whole thing in that comment, e.g. of the cache item itself.
Comment #22
longwave
I think the $as_files option only loads the parts of the container it needs on demand. It might be more performant in some situations? But I presume the classloader will need to interact with MTimeProtectedFileStorage in some way to handle the on-demand loading, so this is definitely followup material.
Comment #23
alexpott
https://symfony.com/doc/current/performance.html#performance-service-con... seems relevant here. I've tested what happens and we're definitely using a single file. I have concerns about the container for a large Drupal site being bigger than what opcache can handle. We'll need to be careful with that.
Things of interest: installing via drush does not result in the container being dumped; neither does running drush cr. However, once you visit the site the container is dumped, and then drush cr does generate a new container. Interesting. If you then remove the container files and run drush cr it's not regenerated until you visit the site. If you diff the container before and after the drush cr you will see something like:
So this drush service bleeding in to the container on drush cr is not a new issue - happens with the db backed container too...
What's super interesting is if you do two drush cr one after another the container is completely removed from the file system.
Note that going to the performance page and flushing the caches via the UI does regenerate the container as a fresh file each time.
Comment #24
alexpott
Couple more things to note. If I switch to this branch on a live site and visit a page I get
The disappearing container after two drush cr's only happens with this branch not on HEAD.
Comment #25
dries commented
This should fix @alexpott's problem. It should be very fast, but I'm not sure if we want it in the critical path:
Comment #26
longwave
I think that check is fine, and necessary to handle this problem.
Comment #27
dries commented
Alright, I just committed it.
Comment #28
dries commented
Re opcache size: I measured on Umami with DDEV. The compiled container is 703KB on disk and takes 1609KB in opcache.
I believe this is actually more memory efficient than the current serialized approach: today, every FPM worker unserializes the container definition into its own heap memory. The opcache is shared across all workers.
The main scenario where opcache size could be a concern is multisite?
Comment #29
catch
I think with container size @alexpott meant https://www.php.net/manual/en/opcache.configuration.php#ini.opcache.max-...
However the default for that is 0 (unlimited) so it's probably OK? And we could add a hook_requirements() check.
There's the risk of the opcache filling up but we have plenty of open issues that can help with that (persistent discovery caches, dropping annotations support so the opcache can strip comments etc.).
Comment #30
catch
For the type issue, can we add something extra to the container cache key maybe? But is_string() is fine too.
Comment #31
oily commented
Re: #19 @catch
I was not sure if the option had been considered. I would have thought it best not to use the option unless it were needed, so agree with you on that. @longwave sees it as intended (in certain contexts) as a way of improving performance.
Re: #22 Interesting.. That option stood out for me. Was just wondering if it might be relevant. As a possible future enhancement makes sense.
Comment #32
alexpott
I've pushed an alternate fix for #24 without the extra function check - just a slight tweak to the cache key, which also makes it more truthful - and we can avoid the extra part of the if.
@catch is correct I'm concerned about the impact of this when the container is huge - umami does not have a lot of contrib installed. I've seen Drupal sites with tonnes of modules installed and I'm wondering whether we should be concerned.
Comment #33
oily commented
I know it's a bit off topic, but since we are looking towards Symfony for performance enhancements: I have been experimenting for 6 months with the high-performance Symfony production setup symfony-docker (https://github.com/dunglas/symfony-docker), which is based on FrankenPHP.
https://frankenphp.dev/docs/worker/ ie:
"Boot your application once and keep it in memory. FrankenPHP will handle incoming requests in a few milliseconds."
Symfony-docker serves as a dockerised local dev environment, but the images can be rebuilt for production and deployed to production. That removes the dev tools like XDebug and optimises the PHP config etc. Could DDEV do the same for Drupal?
Comment #34
alexpott
Pushed up an idea of how to fix the problem with app root and site path... we can add them to the container cache key so if they change we'll build a fresh container.
Still need to work out what's going on with drush cr and the container removal.
Comment #35
alexpott
Okay, I've fixed the code so that it does rebuild and save the container when drush cr is used. Setting to needs review because I think all the things I've spotted have been fixed.
Comment #36
nicxvan commented
Maybe we can test after installing a couple hundred modules, even if it's just manual.
Comment #37
andypost
Comment #38
catch
Is #3585294: Convert bootstrap container from array to PHP exactly the same as the change here for the bootstrap container? I'm guessing the idea is to get that issue in first to have a smaller diff here, but wanted to double check.
Comment #39
longwave
Yes, I extracted the change from here and moved it over there, but the basic idea is the same: the container becomes opcacheable PHP instead of an array that needs parsing at runtime. The issues can land in either order; they cover the same ground but the two containers are independent.
Comment #40
alexpott
Added a test to prove that adding the site path to the cache key works. The test passes on HEAD and on this MR, but fails if you remove the changes to \Drupal\Core\DrupalKernel::getContainerCacheKey().
Just need to work out how to add test coverage for the issue revealed by drush cr.
Comment #41
catch
Thought about this a bit:
database - can handle large objects, but noticeably slows down as they get bigger
APCu - can handle it, as long as the total memory allocated is enough (which a large container won't help with, but not worse than other things we put in there like thousands of config translations)
memcache - by default, can't handle anything over 1mb
redis - can handle it
opcache - by default can handle it, unless you set opcache.max_file_size too low
So I feel like it'll be fine, at least relative to the other options we have, as long as PHP isn't configured to break it. It would be great if someone can apply this diff to a moderately large Drupal install and check, though.
Comment #42
catch
That makes sense, but should we drop it from the MR here in that case?
edit: I see that's already been done, sorry.
Comment #43
longwave
I did already in https://git.drupalcode.org/project/drupal/-/merge_requests/15371/diffs?c... ?
Comment #44
catch
Decided to check some of the larger PHP files that we already have in core, so we know whether we're going to massively increase the maximum size of the largest PHP file. I did Drupal 10 and Drupal 7 because Drupal 11/12 OOP hooks mean we've got a lot fewer massive files in general.
Didn't do anything special to find big files, just checked ones I already knew would be on the bigger end of what's in core with ls -lh
Drupal 10:
Drupal 7:
So common.inc in Drupal 7, which is loaded on every request, is 315k, and while Drupal 7 is unsupported we know a couple of hundred thousand or more sites run it in production with opcache.
The physical size of the file on disk doesn't necessarily map to opcode size but I don't know how to get that for a single file.
Comment #45
nicxvan commented
Doing a quick size test.
I enabled every core module and about 30 test modules.
This branch container size is 878K on disk; only one container file is printed. Two entries in cache_container.
Not apples to apples, but I get this in the db on main for the same modules:
Also, with the standard profile and no additional modules the container size on disk for this branch is 635K. Standard profile on main db entries are 439.3KiB.
Comment #46
longwave
As we are concerned about container size, I picked up #3397224: Event subscribers should be private by default again and made tests pass; I should run it with this branch merged in to see how much difference it makes.
Comment #47
alexpott
I've added a requirements check to ensure that if you have opcache enabled the container is in it. I think this will help any sites which have really finely tuned their opcache settings get it right.
Which actually brings me to an interesting thing. When we invalidate the container, we rebuild it and then write it to mtime protected storage, which even if it is exactly the same container will result in the file name changing. Which means a new entry in the opcache. I wonder if it is worth trying to do this in a better way.
Comment #48
alexpott
I've addressed #47 and it is tested... this should result in quicker cache rebuilds too! No unnecessary container dumps when nothing changes... and the opcache will still work.
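The "no unnecessary container dumps when nothing changes" idea boils down to comparing hashes before writing. A rough sketch (function and key names are illustrative, not the MR code):

```php
<?php

// Rough sketch of skipping a container dump when the compiled output
// is unchanged. Illustrative only; names are not from the actual MR.
function dumpIfChanged(string $compiledPhp, ?string $previousHash): array {
  $hash = hash('sha256', $compiledPhp);
  if ($hash === $previousHash) {
    // Same content: keep the existing file, so its file name and its
    // opcache entry stay stable across cache rebuilds.
    return ['written' => FALSE, 'hash' => $hash];
  }
  // Content changed: a new file would be written and the new hash
  // recorded as the current container identifier.
  return ['written' => TRUE, 'hash' => $hash];
}
```

This is what keeps repeated `drush cr` runs from piling up stale opcache entries: identical rebuilds resolve to the same hash and skip the write.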
Comment #49
dries commented
I tested the opcache accumulation problem on DDEV with Umami. Each invalidateContainer() + rebuild added a new ~1.6MB opcache entry even when the container content was identical. After 3 rebuilds I had 4 stale entries (6.3MB) in opcache with only 1 file on disk. With opcache.validate_timestamps=0, these entries are never evicted.
I was working on a fix (skip save when hash is unchanged, remove deleteAll() from invalidateContainer()) but I see @alexpott just pushed essentially the same approach. Looks good!
Comment #50
oily commented
https://symfony.com/doc/6.4/performance.html#dump-the-service-container-...
"Symfony compiles the service container into multiple small files by default."
Might be an obvious question, but does our implementation of PHPDumper compile the service container into a single file?
Comment #51
alexpott
Here's a performance analysis of cache clearing done by Claude.
70311491b27 is the commit before the fix to not delete the container if the hash does not change.
The test was to clear the cache 5 times, and do it 20 times, while recording the time taken using drush.
Comment #52
alexpott
Re #50: if it was going to multiple files it wouldn't work :) - but you're probably correct that we want to ensure this.
Comment #53
nicxvan commented
The only remaining question I have is NFS: I have clients that use a shared NFS for multiple web heads. Will this have an impact compared with a db container?
Comment #54
alexpott
@nicxvan I think we also need to think about multiple web heads... and what happens when a rebuild occurs, i.e. how do we handle races to rebuild the container.
Comment #55
longwave
Twig templates have used the same mechanism for a long time (and there are many more of them), so if there was a race condition or performance issue on NFS I think we would know about it by now?
I feel like the new hash check should also help mitigate this but I haven't thought it through fully yet.
Comment #56
catch
Feels like there's a higher chance of a race condition and stampedes with the container because it's so early in the request and guaranteed to be needed on every request. It also takes longer to build than a twig template.
Whereas a lot of twig templates end up behind render cache and there will be more variation in when requests reach the point of generating them, even during a stampede since it tends to be one of the last things that happens during a request. Also very hard to reproduce outside of production environments, although usually we do get reports of things like this eventually.
Whether there actually is one though is a different matter, it might be fine and it's going to be hard to rule in or out.
I think it depends what we're concerned about most - is it multiple requests building the container unnecessarily? Or is it that we could end up in some kind of death spiral where the mtime storage or cache key keeps getting invalidated causing another rebuild? Haven't reviewed mtime storage for a while or the latest code here to have a good grasp of what could go wrong.
The only mitigation I can think of is adding a lock service to the bootstrap container and then putting the container write inside a lock. Although the latest changes seem good to reduce repeated container building when the result is the same.
But then what do we do with requests that don't acquire the lock - wait for a bit then build the container without writing it if it's not ready yet? Because it's quite expensive to build there's also the risk of a lock stampede too.
@nicxvan I think NFS is going to come down to how expensive an mtime check is over nfs and also whether opcache suffers at all. However per @Dries apcu issue and profiling in there, even a small container is a solid couple of milliseconds to get from the database cache and larger ones could take 5ms or more, so there is probably some room for things to be comparable or better rather than worse.
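The "lock around the container write" idea above could look roughly like this. This sketch uses PHP's flock() purely for illustration; the comment actually proposes Drupal's lock service in the bootstrap container, and the function name and file layout here are hypothetical.

```php
<?php

// Sketch of serializing container writes behind a lock so that only one
// process dumps during a stampede. Uses flock() for illustration; the
// real code would use a lock service from the bootstrap container.
function writeContainerLocked(string $path, string $compiledPhp): bool {
  $lock = fopen($path . '.lock', 'c');
  if ($lock === FALSE || !flock($lock, LOCK_EX | LOCK_NB)) {
    // Lost the race: this request can still use its freshly built
    // in-memory container without writing; the lock holder persists it.
    return FALSE;
  }
  try {
    // Write to a temp file first, then rename into place, so readers
    // never see a partially written container.
    $tmp = $path . '.tmp';
    file_put_contents($tmp, $compiledPhp);
    rename($tmp, $path);
    return TRUE;
  }
  finally {
    flock($lock, LOCK_UN);
    fclose($lock);
  }
}
```

The non-blocking acquire matters: a process that loses the race returns immediately and serves the request from its own built container, rather than queueing up behind the writer and creating the lock stampede described above.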
Comment #57
longwave
Crossposting from #3397224: Event subscribers should be private by default, which no longer seems particularly worth doing, but some interesting numbers came out regarding service instantiation:
Performance-wise private services make no difference, but it's clear that the PhpDumper change is an improvement. The below table is the average of instantiating the event_dispatcher service from scratch 1000 times:
The event_dispatcher is always instantiated, so if we are 2ms (5x) faster with PhpDumper just for this service then that's an obvious win to me.
Comment #58
longwave
If we are concerned about the possible performance implications in different environments, maybe we can keep both array and PHP dumpers in use, selectable via settings.php? DrupalKernel already feels like it does too much, so extracting the dump and cache related methods to two classes with a shared interface might even be cleaner than what we have now.
Comment #59
alexpott
@longwave leaving support for the array container would help with the container_base_class change too. And then we could deprecate the array container in Drupal 13. We could install new sites using the php dumper, leave existing sites on the array container, and put something on the requirements page that does the deprecation.
Comment #60
catch
We could do this for 11.x, then switch to the dumper container by default for all Drupal 12 sites, with settings.php as an opt-out back to the array container maybe? Treat the setting being NULL as FALSE in 11.x and TRUE in 12.x.
Assuming this gets into 11.4 we'll have at least a couple of months of data to make a final decision on that for 12.x too.
Comment #61
alexpott
@catch we don't even need a new setting, we can use the container_base_class setting :) - I've just pushed a commit that implements this with the help of Claude.
Comment #62
catch
More NFS thoughts: apart from potential race conditions/stampedes, it feels like there are two things that could go wrong for general performance:
1. The baseline performance might be slow - by default there'll be an mtime check from mtime protected storage, then another one from opcache, then the class itself should come from opcache hopefully. This is probably not going to be slower than getting the entire container array out of the database but we're not sure.
2. If NFS performance gets degraded - heavy file writing due to a bulk file import or whatever else, could this be worse than database performance getting degraded under load. We would guess not due to twig templates but again can't be entirely sure.
For high traffic sites, there might be a way around for all of these:
We can suggest a set-up where raw PHP files are used instead of mtime protected storage, in a directory like vendor/drupal/container outside the webroot that will be read-only in production.
This would allow for committing the PHP container to the codebase - either the active code base or in a deployment repo in a build step.
At least the old and new production containers would need to be in the code base, so that the site can smoothly switch from one to the other.
This would mean no NFS and no file writing.
Things that could go wrong:
- installing a module in a hook_update_N() or updating certain config can result in an intermediate container, e.g. different containers before, during, and after a deployment not only before and after. For the 'during' containers it might be fine if they never find their way onto disk, but not sure how to allow for that situation.
- need to make sure that sites can have this setup, but also use mtime protected storage for twig templates etc.
I don't think we need to document that here but it would be good to be relatively confident that it's going to be possible, and then open an issue to document the concept somewhere.
Comment #63
longwave
Had a thought that the container dumper and loader are basically bootstrap services that perhaps belong in the bootstrap container, and then are swappable here for this case but can also be swapped further in contrib or custom code if necessary. Which might go towards solving @catch's notes above - sites that know what they are doing could even ship their runtime container in git and just hardcode that if they wanted. The kernel should have no direct knowledge of this at all, just an interface where it needs to set up the runtime container.
Comment #64
oily commented
Re: #63
When you deploy to production you will often be adding/ removing/ editing services. So not sure about storing and deploying the container in git. Would it be better to run a script that regenerates the container (once) on each deployment?
Comment #65
alexpott
I've taken #63 and implemented, with the help of Claude, something along the lines of what I think @longwave is suggesting.
Comment #66
catch
We won't be able to have a single container in git unless we fall back to the container builder when the container isn't on the filesystem and expect that to work until updates and config import are finished. The main problem I see with that is that both maintenance mode and the page cache require the container, so it is potentially a long window of a lot of requests having to build a container. My idea in #62 would also have that problem for a shorter window, though, if there are two or three container keys due to different module installs during a single deployment. Might be worth opening a follow-up nowish to discuss this more.
Comment #67
alexpott
I'm super happy with where this is now. I've updated the issue summary with help from Claude to reflect the current MR.
Comment #68
oily commented
Re: #66 and #63
Could some of the container operations discussed in #63 and #66 be tested out in a script in core/scripts that gathers info on the production site and reports back any important info and recommendations regarding the container? No doubt this would be in a follow-up. But it could be an interim measure that '...sites that know what they are doing...' could take advantage of, without imposing code solutions until the fruits of this issue have been in flight for a few months?
It seems that at least some of the existing core scripts are Symfony commands, which is pretty handy since the Symfony docs will help. '...sites that know what they are doing...' can thus expand on any ideas they get from scripts produced by the Drupal core team.
Any scripts could be targeted to be run on deployment to production. They could issue warnings, for example, if the latest changes to files might lead to problems with the rebuild or size of the container.
Such script(s) would not need to cover all bases; more like draw attention to the complexity of the container, its importance for stable production sites, and to encourage more detailed scripts to be developed by devops/pipeline staff.
Comment #69
alexpott
Comment #70
alexpott
Comment #71
alexpott
I've renamed DumpedContainer to PhpContainer based on @longwave's comment
After doing this because I think it is a better name I had the thought "Well all containers are going to be PHP containers too... what we really mean here is a php file backed container." FWIW I think PhpContainer is fine but I thought I'd bring this up in case anyone has a great idea and also because it made me smile.
Comment #72
catch
Opened #3585729: [PP-1] Document how to ship the dependency container in code for #62
Comment #73
dries commented
Symfony and Laravel store their compiled containers as plain PHP files without MTimeProtectedFileStorage. Could we do the same here, or is there a specific threat model that requires mtime protection for the container? Asking because we may not need this requirement.
We already have the hash for cache invalidation purposes.
For tamper detection, if an attacker has write access, there is all kinds of damage they can do already, regardless of the mtime protection.
Minor: PhpContainer::__sleep() uses assert(), which is disabled in production. Should this be a throw?
Comment #74
andypost
Btw __sleep() must be replaced as this is new code: #3548971: Deal with PHP 8.5 deprecation of __sleep()/__wakeup()
alexpott
Did some performance testing using ApacheBench. On a node page without page cache or dynamic page cache enabled, and the render cache set to a null backend.
Main
MR
I repeated the tests a few times and even on this very simple page with not that many services involved there is a measurable and consistent speed up.
What gets really exciting is when you have page cache enabled...
Main
MR
Comment #76
longwave
The CR needs finishing, and we also need a release note snippet to tell end users that they can enable this in settings.php if they want to.
Comment #77
longwave
Release note looks good to me.
Comment #78
alexpott
I've created 3 change records with help from Claude. They target different audiences:
Comment #79
alexpott
Removing container_storage_class from the issue summary - it no longer exists.
Comment #80
alexpott
Added info about the bootstrap container changes to the issue summary.
Comment #81
needs-review-queue-bot commented
The Needs Review Queue Bot tested this issue. It fails the Drupal core commit checks. Therefore, this issue status is now "Needs work".
This does not mean that the patch necessarily needs to be re-rolled or the MR rebased. Read the Issue Summary, the issue tags and the latest discussion here to determine what needs to be done.
Consult the Drupal Contributor Guide to find step-by-step guides for working with issues.
Comment #82
alexpottComment #83
alexpott
So I've set up a docker compose file to test NFS plus multiple web heads. With the current MR we hit races both across multiple web heads and within a single web head. This can result in the number of compiled containers growing and growing and growing.
Discussed with @berdir, @longwave, and @kingdutch, who made great suggestions that have resulted in exploring moving the containers to the system temp directory, still using mtime protection. We also found that even after moving to a local file system we created multiple containers, because most web servers have multiple workers which also get in a race. Therefore we need to implement some sort of locking mechanism.
Comment #84
alexpott
Here's how I set up the multi-webhead docker environment... https://github.com/alexpott/multiwebheads
Comment #85
alexpottComment #86
kim.pepper
It wasn't clear to me if this was being done only for performance improvements or to fix other issues. I know there were some performance tests done. Can we add the expected performance improvements to the issue summary?
Comment #87
alexpott
@kim.pepper this is being done for multiple reasons:
I've added this to the issue summary as a rationale section.
Comment #88
longwave
This looks great, added a couple of questions.
Comment #89
alexpott