Problem/Motivation
Twig caching produces a directory structure looking like the following:
sites/default/files/php/twig
- 5915e35d823c3_container.html.twig_5ytYVUs_gdkx819cOy1xqJ2Z_
- 5915e35d823c3_datetime-form.html.twig_F5e-roY2x9_yW4qxQPvi924S5
...
These file names consist of:
- The prefix, generated by uniqid() every time the available Twig extensions change, which is triggered by module installs, and as such problematic on multi-webhead environments. This prefix is already stored in State so it can be shared across all webheads.
- The Twig file name
- Protection provided by the PHP file storage
The current method of clearing the Twig cache (as seen in drupal_flush_all_caches()) is this:
// Wipe the Twig PHP Storage cache.
PhpStorageFactory::get('twig')->deleteAll();
By default (see \Drupal\Core\PhpStorage\PhpStorageFactory::get()), it is stored in the public:// stream wrapper, specifically in public://php/:
if (!isset($configuration['directory'])) {
  $configuration['directory'] = PublicStream::basePath() . '/php';
}
So, by default, it is written to public://, which by definition is shared across all webheads.
However, on hosting environments that use multiple web heads (such as Acquia Cloud Enterprise), the public://
stream wrapper is backed by a high-latency distributed file system. This is fine performance-wise for downloadable files (because they can be cached by a reverse proxy such as Varnish), but not for files that are required tens if not hundreds or thousands of times per second, which is the case for compiled Twig templates. Even when those files are cached in PHP's OpCache, the file system is checked to see whether the file has changed in the system, still resulting in many file system hits. If that file system then has high latency, this leads to terrible performance.
So, Acquia Cloud Enterprise, like many other Drupal hosting platforms, does something like this in settings.php:
$settings['php_storage']['twig'] = array(
  'directory' => '/some/writable/local/directory',
);
The result is that on hosting platforms that do this, this code:
// Wipe the Twig PHP Storage cache.
PhpStorageFactory::get('twig')->deleteAll();
really only clears the local directory that stores the Twig cache, on the current webhead only.
The root cause of this problem is quite simple: drupal_flush_all_caches()
assumes that the Twig PHP Storage Cache's directory is shared across web servers.
Proposed resolution
Either:
- Use $settings['deployment_identifier'] to key compiled Twig templates by; having multi-webhead hosting platforms set this to the git commit hash would then solve it (#5 (catch) + #8 (Wim Leers) + #14 (Berdir)).
- Add a Twig::invalidate() API that drupal_flush_all_caches() would call, which clears the state variable we already have. This forces a recalculation of the prefix across all webheads (#7 (dawehner) + #10 (msonnabaum)).
A concern with solution 1 is that nobody knows about $settings['deployment_identifier']; the general response is that you already need to know about such things anyway (see #14).
A concern that applies to both is that stale files can linger forever (#48 + #49 + #52). This is impossible to solve though; Drupal can't be responsible for deleting those stale files, because it may require a long-running process to delete potentially many thousands of files. Therefore, it must be up to each hosting platform to delete those files in a background job. #49 contains one possible approach (the one pioneered by Pantheon).
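For solution 1, the settings.php side could look roughly like this (a hedged sketch; the environment variable name DEPLOY_GIT_SHA is an assumption for illustration, not something core or any platform defines):

```php
// Sketch only: key compiled Twig templates by a per-deployment value.
// DEPLOY_GIT_SHA is a hypothetical variable a hosting platform might
// expose; substitute whatever your platform provides (e.g. a git hash).
if ($sha = getenv('DEPLOY_GIT_SHA')) {
  $settings['deployment_identifier'] = $sha;
}
```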
Remaining tasks
User interface changes
None.
API changes
None.
Data model changes
None.
Comment | File | Size | Author |
---|---|---|---|
#130 | interdiff-2752961.txt | 1.3 KB | dawehner |
#130 | 2752961-130.patch | 6.09 KB | dawehner |
#114 | interdiff-2752961.txt | 525 bytes | dawehner |
#114 | 2752961-114.patch | 6.93 KB | dawehner |
#112 | interdiff-2752961.txt | 1.3 KB | dawehner |
Comments
Comment #2
cilefen CreditAttribution: cilefen commented
Is this duplicating #2301163: Create a phpstorage backend that works with a local disk but coordinates across multiple webheads?
Comment #3
dsouza_rohan CreditAttribution: dsouza_rohan at Intelliswift commented
#2301163: Create a phpstorage backend that works with a local disk but coordinates across multiple webheads
Comment #4
Wim Leers
This focuses on a subset of what #2301163: Create a phpstorage backend that works with a local disk but coordinates across multiple webheads is covering. It's not a duplicate.
Comment #5
catch
We should at least use $deployment_identifier to key twig templates by.
Comment #6
catch
Also, a general note: I agree with splitting this out from the phpstorage issue - there are things like $deployment_identifier we can do that don't require changes to phpstorage itself.
Comment #7
dawehner
In order to invalidate information across multiple webheads one needs a way to share information in an easy way. Much like we are now using the cache backend for the container, we could use a state => cache entry which is used by the files:
Would that simply work?
Comment #8
Wim Leers
Why not just the deployment identifier?
What you say can work too (the random hash would be cleared in a single shared location and would therefore affect all web heads), but what's the benefit?
IMO the benefit of a deployment identifier is that you also prevent race conditions when two web heads are not updated at the exact same time (eg a few seconds apart).
Comment #9
dawehner
Well, of course the deployment identifier works for people who are aware of it, but the fact that people run into this problem shows that they don't know about it.
This issue is for solving the problem in a different way potentially.
At the moment the invalidation just happens on the one webhead that receives the invalidation. When we use cache/state we would share the invalidation between webheads automatically, so something like
drush cr
would actually work as expected.
Comment #10
msonnabaum CreditAttribution: msonnabaum commented
What dawehner proposes is very much what I had in mind. We need to think about this in terms of correctness first.
Comment #11
dawehner
So here is a patch for it. I talked for a bit with @fabianx about it and he seemed to be in favour of it as well.
Comment #13
dawehner
Let's quickly fix this failure.
Comment #14
Berdir
But the downside of that is that phpstorage now depends on cache/state?
We actually have a cache backend that uses phpstorage, if someone were to use that for bootstrap, that wouldn't end well :)
Yes, you have to know about deployment identifier. But if you do multi-head deployments, then you have to *know* stuff anyway. You have to know what to share, how to share, how to deploy/invalidate.
I think what most of those sites actually want for production, be it single or multi-head, is a phpstorage backend that isn't read only but ignores delete requests, because you know that it will not change unless you actually do a deployment. And then you can either use a deployment identifier or explicitly delete the files on deploy.
The problem that I see with both this patch and the deployment identifier approach is that we don't actually delete the old files. Over time, won't that become a problem?
Comment #15
dawehner
I see a potential problem in that indeed.
Hm, in order to solve that properly we could introduce a filename prefix that points to the deployment identifier or similar. That way we could clean up old files once they no longer match the current deployment identifier.
Comment #16
dawehner
Here is the alternative approach, which at least allows using the deployment identifier.
Comment #18
chr.fritsch
I fixed the tests to bring attention back to this issue.
Comment #20
Berdir
Failed with random fails before.
Comment #21
anavarre
Any reviewer? It's a pretty stubborn issue with multiple web heads.
Comment #22
dawehner
One thing I'm wondering ... does this mean we should have some default fallback for the deployment identifier to always contain the Drupal version? Maybe a dedicated method
Settings::getDeploymentIdentifier()
and use that by default?
Comment #23
catch
https://api.drupal.org/api/drupal/core!lib!Drupal!Core!DrupalKernel.php/... uses the version string and the deployment identifier.
Comment #24
dawehner
Right, given that, we should simply not comment out default.settings.php and call it a day?
Comment #25
star-szr
@dawehner in other words, leave default.settings.php as is, correct?
Comment #26
dawehner
Yeah, I would think so?
Comment #28
xjm
@alexpott, @cilefen, @catch, @effulgentsia, and I discussed this issue today and agreed that it is a major bug because it reduces scalability and can lead to invalid caches, fatals, etc. on multiple web heads.
I thought #2301163: Create a phpstorage backend that works with a local disk but coordinates across multiple webheads had been fixed and was going to reference that, but no such luck. This issue was spun off from that one. I started to move this to the theme system, but it looks like the fix is actually still for the PHP storage here.
Is there any way to provide some manner of test coverage here?
Also, sounds like the patch needs work to remove the
default.settings.php
hunk (or perhaps expand the documentation for the line instead of uncommenting it).
Thanks everyone!
Comment #29
bradjones1
Re-rolled.
Comment #31
moshe weitzman CreditAttribution: moshe weitzman commented
Yeah, I think some more documentation about the value of deployment_identifier would be nice.
I guess the site should increment it for each deployment? So, something like
$settings['deployment_identifier'] = \Drupal::VERSION. '-deploy1'
. The next time you deploy you write 'deploy2', and so on. Seems like a hassle compared to the State solution.
P.S. @anavarre et al - Acquia could use its settings.inc to automatically set a unique deployment_identifier.
Comment #32
moshe weitzman CreditAttribution: moshe weitzman commented
Is there some other cache backend that twig could/should use in a multi web-head situation?
Comment #33
dawehner
In a good environment one could use environment variables to get this deployment identifier; see #2501901: Allow configuring the environment via an environment variable as a "discussion" about leveraging those.
Comment #34
bkosborne
As someone prepping to launch a D8 site on multiple web heads, it seems like there are just two current options to resolve this:
1. After any twig template has changed and has been deployed, SSH into each web server and clear the Twig cache:
\Drupal\Core\PhpStorage\PhpStorageFactory::get("twig")->deleteAll();
2. Apply this patch above, then establish a method where the deployment identifier variable is changed every time you deploy updates to Twig templates.
Is that correct?
Comment #35
webflo CreditAttribution: webflo at UEBERBIT GmbH commented
That is correct.
Comment #36
bkosborne
Heads up for anyone on Acquia: this patch alone won't work. In Acquia's settings.php file that's included on all sites, it sets
$settings['php_storage']['twig']['secret'] = $drupal_hash_salt
. Not sure why it does that when that's already what core does in PhpStorageFactory. Anyway, you need to unset that secret in your settings.php file so that the logic in PhpStorageFactory runs.
Comment #37
MKorostoff CreditAttribution: MKorostoff commented
Hey guys, just a thought, but would it be possible to make Drupal configurable such that the twig cache can reside in a shared location, much like other caches? Given the option, I'd be very happy to store the twig cache in a memcache server that is shared amongst all of my apache servers. Thanks!
Comment #38
MKorostoff CreditAttribution: MKorostoff commented
Here's what I don't understand about this issue: I currently see in my project a directory called sites/default/files/php/twig. In this directory is a bunch of files that sure look like they're the twig cache. This is significant because this directory is shared among all my apache servers. So my question is: if this directory isn't the twig cache, what is it? Am I understanding this accurately if I say that these files are the "compiled" output of twig templating, but the actual cache is elsewhere?
Comment #39
Berdir
Nope, that is the cache, there is nothing else. If they are in a shared directory, then everything is working correctly. But the performance isn't great, because writing and accessing those files is not as fast as it could be. Being able to do that in a local directory on each server, especially something that's actually tmpfs, would be faster.
This issue is about supporting such scenarios better.
You can even do local folders already if you include a deployment step to clear that folder on every server on deployment. As long as you don't have a scenario where configuration changes could result in *changed* compiled twig templates. There's nothing in core that does that AFAIK, but it's technically possible.
Comment #40
MKorostoff CreditAttribution: MKorostoff commented
@Berdir, thanks for the reply. So here's what I don't get: currently, Acquia hosting officially recommends SSHing into each individual server and clearing the twig cache. So if you have 15 apache servers, for the time being, that means opening and closing 15 SSH sessions. The thing is, every Acquia production instance has a filesystem that is shared by all of the web servers. So my question is: when I SSH into a server to clear the twig cache, what am I clearing? If I were only clearing the contents of sites/default/files/php/twig, wouldn't I just have to do it once on any randomly selected web server?
Comment #41
Berdir
If it is not in a local directory, then that is enough, yes. In fact, a normal full cache clear like drush cr will already do that as well.
Comment #42
bkosborne
@MKorostoff Acquia does not store the twig cache files on the shared filesystem; instead they are on temp file stores for each web head. I believe I saw a comment in the Acquia config file where that's set, saying it's related to performance.
Comment #43
MKorostoff CreditAttribution: MKorostoff commented
@bkosborne thanks!
Comment #44
deviantintegral CreditAttribution: deviantintegral at Lullabot commented
I did some investigation into this a few months back, as I was thinking the same thing. I'll copy my notes here.
The options I see are, ordered from least to most complex:
If I remember right, twig template files use the hash of the file contents in their names. In other words, there isn't much of a reason to clear template caches unless disk space is a concern. After a deployment, even if old templates are on the web nodes, any template changes should result in a different hash.
For this set of sites, we ended up writing and using https://www.drupal.org/project/twig_temp and not worrying about clearing templates as we were deploying with Docker, so filling up all the disk space was never an issue.
Comment #45
deviantintegral CreditAttribution: deviantintegral at Lullabot commented
Oh, and warming the template caches at build isn't an option either, even if you wrote a drush command for it, since #template can be used in render arrays to generate templates at runtime. You could warm all file-based templates, but dynamic templates would still be generated on a per-web-node basis.
Comment #46
Wim Leers
#42 is correct: Acquia Cloud Enterprise has multiple webheads, and to avoid hitting the slow (because of network latency) filesystem that is shared across webheads, it stores compiled Twig templates on local (ephemeral) storage.
#44: Thanks so much for writing up your thought process! But I'm confused why some of the options you listed even made the list. Feedback below:
We should not require the use of a shared temporary directory.
This is not possible, because in Drupal it's impossible to enumerate all Twig templates. This would be splendid though; it'd remove some deployment risks.
That is effectively what Acquia Cloud Enterprise does. Indeed the current work-around is to wipe temp on all webheads after deployment. However, an actual solution would be to do what this patch has been doing: ensure there is an identifier/hash/… that all webheads respect from a single source of truth (either
State
in the DB, or deployment_identifier
in the code, see the discussion in #1 through #15).
Let's not forget this is first and foremost a synchronization/coordination problem.
Drupal can't require memcache. We could store it in the DB instead, but that'd just mean we'd be using the DB as a cache storage. Per
\Drupal\Component\PhpStorage\PhpStorageInterface
's docs, that'd be supported. But we'd be incurring network latency for every single file. The whole point is to avoid that latency.
… but the point of caching compiled Twig templates is that we don't need to hit the file system for every single template on every single request. You're right that we hash the contents, but we don't check whether the original Twig template has actually been modified on every single request. And the compiled Twig templates are stored in PHP's OpCache, so we don't hit the file system at all.
The problem is that (as the IS says) after a deployment,
drupal_flush_all_caches()
is called, which calls PhpStorageFactory::get('twig')->deleteAll();
, but that only affects the storage on the current machine. All other webheads still have the compiled Twig templates, and are blissfully unaware that Twig templates may have been updated.
So what we need is some hash in
State
or thedeployment_identifier
insettings.php
to be modified for every deployment, so that coordination across multiple webheads is possible.
(Getting back into this, so I hope I didn't make any mistakes/am not misremembering certain things.)
Comment #47
Wim Leers
Now, let's update the IS, because it's currently biased towards Acquia Cloud Enterprise: it portrays the problem as something that affects all Drupal 8 sites/hosting, but that's not true.
Also, let's list the proposed solutions that can work, and the concerns raised against them.
Hopefully that gets this issue back on track again :)
Comment #48
Berdir
#46 looks correct to me. My only comment is that IMHO, if the infrastructure supports it, I would recommend some kind of deployment job that clears the folder on all servers over Drupal providing a changing identifier.
Because with such an identifier, the old files will not be removed until you do a cache clear in drupal (and then only on one server), so there is a risk that you might get a considerably large amount of files on some servers, and that in turn can slow down accessing those files and writing new ones.
That apparently is a hard problem to solve (not sure why); platform.sh enterprise also does not support that yet.
Comment #49
Wim Leers
Good point, added that concern to the IS.
And funny enough, the Acquia Cloud ticket was updated less than an hour ago with information about an alternative solution, pioneered by Pantheon: https://github.com/pantheon-systems/drops-8/pull/182/files. It does this:
As part of that approach, one could also delete the "old temp" on every deployment. It wouldn't introduce problems because it's not shared across webheads anyway, so there's no coordination problem. And it'd solve the "ever growing file system usage" problem described in #48 AFAICT.
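The rolling-tmp idea could be sketched in settings.php roughly as follows. This is a hypothetical illustration, not the real implementation (which lives in the linked drops-8 PR); only the PANTHEON_ROLLING_TMP variable name comes from this thread:

```php
// Hypothetical sketch of the rolling-tmp pattern: point Twig's PHP
// storage at a per-deployment local temporary directory exposed by the
// hosting platform, so stale files never collide with the active set.
if ($rolling_tmp = getenv('PANTHEON_ROLLING_TMP')) {
  $settings['php_storage']['twig']['directory'] = $rolling_tmp . '/twig';
}
```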
Thoughts?
Does this make solution 2 in the IS (introducing
Twig::invalidate()
, which updates a hash/key/ID stored in State
or Cache
) more appealing again?
Comment #50
moshe weitzman CreditAttribution: moshe weitzman commented
I support solution #2 for the reasons dawehner mentions in #9. It works for most people as they would expect. I'm not overly bothered by the fact that invalidation relies on State.
Comment #51
pwolanin CreditAttribution: pwolanin as a volunteer and at SciShield commented
Minor point: related to other fixes we made in the Twig system, we don't necessarily want a random value to check against, but rather a unique value each time (e.g. microtime() or uniqid()), since a truly random value can repeat (in theory).
Comment #52
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
In the Pantheon solution, the `ROLLING` portion of the `PANTHEON_ROLLING_TMP` environment variable is simply the deployment identifier. As was previously mentioned, deleting the stale twig cache files is not critical, but not deleting them at all causes them to pile up over time. Pantheon solves this problem by deleting stale rolling temporary directories slowly over time in a backend scavenge process. The scavenger can look up the deployment identifier for each binding directory, so it knows to skip the directory that contains the active set of cache files. It also avoids deleting any directory that has been modified too recently, just for good measure. Drupal sites with very complex custom themes can have a very large number of twig cache files, which can sometimes take a significant amount of time to delete. Keeping scavenging out of the deployment process is therefore beneficial, as that's one less thing for deploys to have to wait for.
It seems to me that solving the multi-head cache scavenge problem in Drupal is not the most practical solution. The services that manage the different heads are better suited for cleanup; deleting a lot of twig files as part of a web request, similar to the way poor man's cron works for sites without cron, would be even worse than doing it at deploy time. If code to do this is added to core, it should be an opt-in service for folks to turn on. Since single-head already works, perhaps multi-head could be solved in documentation only.
@deviantintegral's https://www.drupal.org/project/twig_temp will suffice for some use cases as well.
Comment #53
Wim Leers
#50: I don't know PHPStorage well enough to know/predict what the consequences are of making it dependent on State. Hopefully @Berdir can chime in.
+1!
Comment #54
Wim Leers
I just noticed that #2569119: Use Crypt::hashBase64(), not hash('crc32b') or sha1 for placeholder tokens made some changes to TwigEnvironment. The changes (and the code) are a bit confusing, but twig_cache_prefix is recalculated (and set to uniqid()) whenever:
- \Drupal\Core\DependencyInjection\Compiler\TwigExtensionPass discovers new or updated twig.extension-tagged services during a container rebuild
- the twig_extension_hash_prefix State entry does not exist (either because it's a fresh install, or because this key-value pair was deleted)
(The accompanying change record: https://www.drupal.org/node/2833264)
This means that proposed solution 2, which includes the introduction of Twig::invalidate(), could be as simple as this:
Patch attached.
(I think this is the change that @pwolanin is referring to in #51?)
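Based on the mechanism described above, a minimal sketch of what the proposed invalidation could do (Twig::invalidate() is the proposal here, not existing API; this is not the attached patch):

```php
// Hedged sketch only: deleting the 'twig_extension_hash_prefix' State
// entry forces every webhead to recompute the uniqid() prefix on its
// next request, so all webheads pick up freshly compiled templates.
public static function invalidate() {
  \Drupal::state()->delete('twig_extension_hash_prefix');
}
```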
Comment #55
Wim Leers
The twig_extension_hash_prefix key is a very confusing misnomer, because it contains more than just the extension hash prefix.
Let's at least move it to a class constant: that prevents bugs due to typos, and allows us to make it clearer.
Comment #56
Wim Leers
#48's concern that files can linger, which was confirmed and elaborated on by #52, has been added to the IS by me in #49. I attributed it as a problem for solution 1. But really, it's a problem for both solutions.
IS updated.
Comment #57
Wim Leers
The patch in #54/#55 implements solution 2. But we could actually have it implement both solutions. Essentially by changing
to:
Because the current solution still requires
drupal_flush_all_caches()
to be called to get updated Twig templates. We shouldn't require that.
In other words:
/admin/config/development/performance
for multiwebhead setupsAlso relying on the deployment identifier would fix that.
Thoughts?
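The combined approach could be sketched like this (a hedged illustration only, not the actual patch; the exact composition of the prefix is an assumption):

```php
// Sketch: derive the template cache prefix from both the shared State
// hash and the deployment identifier, so either an explicit invalidation
// or a new deployment yields freshly compiled templates.
use Drupal\Core\Site\Settings;

$prefix = \Drupal::state()->get('twig_extension_hash_prefix', '')
  . Settings::get('deployment_identifier', \Drupal::VERSION);
```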
Comment #58
Wim Leers
Also, per #56, we'll need documentation for multiwebhead hosting platforms, so it's clear what Drupal expects them to do/provide. And undoubtedly we want a change record to announce that to the world.
Comment #59
dawehner
So does that mean we have an uncached state query on every request in HEAD?
We should document here how this helps multi-webhead deployments, IMHO.
Comment #60
Wim Leers
#59:
Comment #61
dawehner
Sorry for not commenting on the actual fix. I think it solves the problem perfectly!
Comment #62
Crell CreditAttribution: Crell at Platform.sh commented
Hi, representative from a multi-webhead host here. :-)
If I'm following the proposal here properly, it's just a question of how we generate a unique value for each deploy/change to use, right? Either use a host-provided value or generate a random one on each cache clear (which would also happen after a deploy and thus pick up that use case, too)?
We (Platform.sh) have a git tree hash value we can trivially map into the settings array so option 1 would be fine with us. My concern is that it assumes that the templates will only ever change on a deploy; while in a logical world with most frameworks that's the case, it would preclude user-editable Twig files that get compiled like any other Twig file. I'm not sure if anyone is doing that yet, but it's conceptually something that I know we discussed on and off many times in the Panels/SCOTCH world. (Think, basically, Contemplate module but not terrible. Generate a Twig file based on user config, compile it, the compiled version gets used so there's no performance overhead and no PHP is stored in the DB so there's no security issue.)
That would imply option 2 is more robust, as it still leaves the door open for configuration-based changes to the compiled twig templates. Leveraging the deployment identifier as part of that seems fine though; in fact I'm going to go add that value to our standard project template now that I know it exists. :-)
The other question is cleanup. I agree that blocking a cache clear or delete on deleting many files, possibly over a network connection, is a bad idea. However, we shouldn't just punt it to hosts unless we give hosts a clear way to do so. If I read the current proposal correctly (and I may well not be), the prefix is just a file prefix; that means the cleanup process needs to "delete all files in $dir except those that begin with $value_in_the_state_system"? That seems error prone and inefficient, and the cleanup process would need access to Drupal anyway to look up that value. (Since, per above, commit hash alone is insufficient.)
Using the ID as a directory name rather than file prefix would help a little, but ideally we'd still allow an outside process to do the cleanup without needing to access Drupal systems. If we need that, then we may as well let Drupal do it itself in a cron task.
'course, now that I think about that, though... If the whole point is to let each web head have the cache on a *local* disk, is it really going to be that slow? Slow enough that we can't just do it inline on deploy and eat the extra second or two to rm a directory? (And for changes coming from config without deploy, meh, disk is cheap.)
I may be missing something here...
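The directory-per-identifier cleanup described above could be sketched as a standalone job like this (hypothetical; $twigDir and $currentId are placeholder values a host would supply, e.g. by reading the deployment identifier):

```php
// Hypothetical cleanup-job sketch for a directory-per-deployment layout:
// remove every per-deployment directory except the active one.
$twigDir = '/mnt/local/twig';   // placeholder path
$currentId = 'deploy2';         // placeholder active deployment ID

foreach (glob($twigDir . '/*', GLOB_ONLYDIR) as $dir) {
  if (basename($dir) !== $currentId) {
    // Simplified recursive delete; a real job would also skip
    // directories that were modified very recently (see #52).
    exec('rm -rf ' . escapeshellarg($dir));
  }
}
```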
Comment #63
dawehner
The more I think about it, the more I like Wim's solution. I updated the issue summary to explain the actual status quo and how the State solution solves the problem, especially because it doesn't worry about the cleanup step, for the reasons described in the issue summary. Not sure anyone would care about it, but in case someone does, there is always room for another issue; solving that here would just derail the solution we have.
Here is a change record: https://www.drupal.org/node/2879963
For a second I was wondering whether we should use
\Twig_Environment::clearTemplateCache
but it turns out commit cacfb069b2e65d5487489677238142b464135b40 in Twig removed that capability; see https://github.com/twigphp/Twig/issues/1879.
Comment #64
Berdir
Yeah, since we have the state key already, it makes sense to build on top of that.
However, the patch completely removes deleting the actual files; can we really rely on some infrastructure process existing by default? That's also going to be annoying for local debugging when you need to debug something in a generated twig file.
What about keeping the default delete for single-head servers as a default, but providing a subclass that deletes the state key instead?
Comment #65
dawehner
What about doing both for now? A multi-webhead project can still implement their own subclass, which does something better, if they want.
Comment #66
Crell CreditAttribution: Crell at Platform.sh commented
Question: How likely is a cache clear without redeploy to change a cached Twig file?
I noted above the potential for config-driven templates, but I don't know if anyone is doing that currently. Anyone else know?
Here's the challenge I see, as a host:
1) If a cache clear does not change compiled twig files, then the deploy ID is sufficient and totally easy. In which case the state ID is kinda irrelevant.
2) If cache clear does change compiled twig files and cache clears are frequent compared to deploys, then any clean up script MUST leverage the state ID and thus MUST run from within Drupal.
3) If cache clear does change compiled twig files and cache clears are not frequent compared to deploys, then it's safe enough for hosts to ignore the extra files and fall back to option 1.
So then the two questions are:
1) Do cache clears change compiled twig files (viz, is that a thing we want to support)
2) Are cache clears sufficiently frequent compared to deploys that we care about them being separate events?
Comment #67
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
Cleanup in cron would be fine. Providing a default cleanup algorithm that can be overridden would also be fine. If supporting downloading twig templates or in-browser twig editing et al. is desired, then managing the deployment identifier inside Drupal makes sense.
I am wary about doing cleanup at the instant that the twig files are invalidated. If another process on the same web head was accessing the twig cache at exactly the same time, then you would thrash the cache. At the end of the day, the inefficiencies of this sort of thing are not going to be too great; however, in any cleanup scheme, it would still behoove you to avoid deleting files that might still be in use (i.e. by a process still holding the old deployment identifier in a local or static variable).
Comment #68
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
Posted before I saw #66. From my standpoint, those questions are still open. However, it seems that the cost of managing the state ID from within Drupal is not great, whereas NOT managing this ID in Drupal prevents these scenarios from working smoothly on multi-webhead hosts.
Comment #69
bkosborne
Just a note that if the proposed patch goes in, then ALL Drupal 8 hosts (not just multi-head) would need to implement a solution to clear old files from the twig cache directory, right? That seems like a pretty big change.
Comment #70
dawehner
Could we maybe continue to clear the files like now?
Comment #71
dawehner
@bkosborne
You are indeed making a great point, to be honest.
With #70 we would
a) Not make the behaviour worse for environment with one webhead.
b) Fix the deployment problem with multiple webheads.
c) Not make it worse for multi-webhead environments, i.e. potentially keeping files around. We can continue discussing that problem in a follow-up as well; it's an orthogonal problem IMHO. Multi-webhead environments could then also solve the problem independently of Drupal.
Comment #72
Crell CreditAttribution: Crell at Platform.sh commented
@dawehner Any thoughts on the questions at the end of #66? Speaking as a host, those are the main questions I have that would drive how we'd do the cleanup process.
Comment #73
dawehner
It does when you change the underlying file. A cache clear is an obvious thing people execute constantly on development environments.
I would say not on production environments. Just deployments should clear caches like that, unless someone just installs modules as they want.
Comment #74
anavarreSpent some time trying to carefully document a step-by-step to reproduce.
In dev (or in any env with only one web node), make a simple change to
page.html.twig
so that you get: <div id="page-wrapper" style="color:red">
Confirm you see the changes on the page. If not, rebuild caches / enable Twig debug mode.
Now that you've confirmed the change works, push to prod or to any env with 2 or more web nodes (if Twig debug mode is enabled, turn it off).
Without the patch in #71
1) Ensure your code is here on e.g. 2 different web heads:
2) Confirm the code isn't there on both web heads, in the compiled Twig files directory (exact location depends on the web host):
3) Invalidate the cached Twig template files only on one of these two webs. E.g.
d8@web-1:$ drush @d8.prod --uri=http://site.tld/ ev '\Drupal\Core\PhpStorage\PhpStorageFactory::get("twig")->deleteAll();'
4) Confirm the twig directory is empty for that specific web node only. Immediately after, visit the site and confirm the text is red. If not, then you've hit the web node for which the Twig compiled files haven't been rebuilt.
5) Rebuild caches on both web nodes.
6) Confirm the template modification is now showing up on the server on at least one of those web nodes.
My understanding is you need to manually invalidate the Twig cache on each web server in your stack for the changes to show up everywhere.
With the patch in #71
Repeat all steps, and change the color to make sure you see a difference (e.g. green). You shouldn't have to do anything and changes should appear immediately without having to manually invalidate Twig caching.
Unfortunately, even after multiple hard refreshes in the browser and comparing as auth/anon, I'm still not seeing consistency in both web nodes. E.g.
So unless I'm missing something as to the expected behavior or there are steps I haven't done correctly, this is still not working. It's also why I was under the impression leveraging the
deployment_identifier
might be best. Could someone please clarify?
Setting back to Needs work for the time being.
Comment #75
dawehner
@anavarre
Is it possible that you have render caching involved on top of that? Maybe page_cache or others are still problematic?
Did you also actually trigger the flushing of the caches / manual invalidation of the template? Do you mind describing how you actually deploy?
Comment #76
anavarre
I'm testing with a site set up with realistic production settings, that is, with
page_cache
/
dynamic_page_cache
enabled and a load balancer with Varnish in front of it, with max-age set to e.g. a day.
Usually, what works is to invalidate the Twig cache on each web node, and this is what I expect this issue can help solve. Imagine a setup with 10 web nodes: the command below would have to be run on each web.
$ drush @d8.prod --uri=http://site.tld/ ev '\Drupal\Core\PhpStorage\PhpStorageFactory::get("twig")->deleteAll();'
Deploying happens by pushing code via a UI, but what it does under the hood is it creates a new tag and pulls that tag on each of the production webservers in the stack. There's no specific deploy 'hook' like clearing caches or anything fancy.
Please let me know if you need me to clarify anything further.
Comment #77
dawehner
Well, at least for now the solution relies on having some form of deployment step. This deployment step would call
\Drupal::service('twig')->invalidate();
on any of the webheads.
On top of that, though, you still have the problem of page caching and all the other things. Conceptually, any kind of code deployment (whether it's Twig files or PHP code) could change your rendered output, so you might want to invalidate the
'rendered'
tag.
So in other words, ensure your deployment does those two things.
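The two deployment steps above can be sketched as a dry-run script. Everything here is a placeholder (the webhead names, the ssh transport, the `@d8.prod` alias borrowed from earlier comments); a real script would execute the drush commands over ssh instead of writing them to a plan file.

```shell
#!/bin/sh
# Dry-run sketch of the two deployment steps (hypothetical hosts and alias).
set -e
WEBHEADS="web-1 web-2"          # placeholder list of webheads
PLAN=/tmp/deploy-plan.txt
: > "$PLAN"

# Step 1: invalidate the compiled Twig templates on every webhead,
# because each webhead has its own local template cache directory.
for h in $WEBHEADS; do
  echo "ssh $h drush @d8.prod ev '\Drupal::service(\"twig\")->invalidate();'" >> "$PLAN"
done

# Step 2: invalidate the 'rendered' cache tag once; cache tag state lives
# in the shared database, so this does not need to run per webhead.
echo "drush @d8.prod ev '\Drupal\Core\Cache\Cache::invalidateTags([\"rendered\"]);'" >> "$PLAN"

cat "$PLAN"
```

The asymmetry is the point: the Twig cache is per webhead and needs one invalidation each, while cache tags are shared state and need only one.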
Comment #78
anavarre
This is what concerns me. IIUC, invalidating the
rendered
tag means invalidating each and every URL that has this tag in its cache tags list, e.g. any node, view, the frontpage, etc. The point of cache tags, as I understand it, is to make sure you don't have to rebuild caches anymore (unless truly required) and that only what needs to be invalidated gets invalidated.
If we're clearing out the
rendered
tag, we're basically telling Drupal (and Varnish, if we're using Purge and a purger) to flush out a ton of things.
Is this really the best approach? Am I misunderstanding something?
Comment #79
dawehner
Well, at least for now cache tags are just about content and configuration, not about code. If you think about it, each piece of rendered output would need to know every template that was involved along the way.
This, though, would just cover the deployment of Twig files, not the deployment of any PHP file change. There is no sane way to determine which code is involved in generating which part of the page.
I think this discussion is totally out of scope here, though:
a) There is a concrete problem we need to solve: deployments leave stale templates around, with NO way to solve it properly right now.
b) That problem would have to be solved separately, with some additional magic.
Comment #80
moshe weitzman CreditAttribution: moshe weitzman commented
@anavarre - before this fix, you could do a cache-rebuild and still get old templates if your rebuild happened on a different web head. With this change, assuming it works, a cache-rebuild will be enough.
The goal here isn't to avoid a cache-rebuild on each deploy. That's still needed, unless your code change is trivial (i.e. doesn't affect any rendered output).
Comment #81
anavarre
#80 - I understand, but I can't come to terms with doing this, because it means we'd always have to warm up caches, which can impact high-traffic sites. And Drupal 8 caches so much compared to D7 that it's a shame we can't invalidate only the cache that needs to be invalidated. That being said, I agree it'd already be an improvement over the current state of things, especially when you have to manually invalidate caches on 10 web heads.
Here's another round of tests with the patch in #71, still. On both live web heads I made sure to run
drush cr
and clear Varnish to make sure I'd start fresh.
Then, I queried each web node's frontpage to make sure I'd have the correct Twig template generated:
I then made a change locally (changed color from red to green) and pushed to dev. Then, deployed to prod and made sure to query both web nodes and hit the frontpage. Confirmed the color is still red at this point.
Now, ran
drush cr
from one of those web nodes. With Drush aliases it shouldn't matter which one. Querying each web node's frontpage and grepping for
page-wrapper
does return the correct color (green). The filesystem, however, doesn't look like it has wiped outdated template files on the web node from which I didn't run
drush cr
. E.g.
Is this the expected behavior? Is Drupal smart enough in this instance to look up the newest template and not serve one or the other randomly? Also, while running a
drush cr
with this patch might solve the issue for authenticated users, we'd still have to invalidate the
rendered
tag so that the Varnish cache gets invalidated too, IIUC?
Also, I did try to replace
drush cr
with
drush @d8.prod ev '\Drupal::service('twig')->invalidate();'
to try and resort to less expensive operations, but it didn't work in my testing.
Comment #82
moshe weitzman CreditAttribution: moshe weitzman commented
Yes, old unused compiled Twig files are left behind. Drupal and the Twig cache are smart enough to ignore old files. We discussed this a bunch in this issue and we are leaving this cleanup for hosts to figure out (for now). It would be fine to just wipe this directory on each web head every n months.
Correct.
Comment #83
anavarre
Yes, I remember the cleanup was discussed in #52, but I was more wondering about Drupal picking up the correct template, and I'm glad Drupal/Twig are smart enough to handle that just fine.
As to the
rendered
tag, would it be in scope for this issue to account for its invalidation? Because it won't happen via a cache rebuild, so I'm wondering what our options are here.
Comment #84
moshe weitzman CreditAttribution: moshe weitzman commented
I see clearing of all cache bins in drupal_rebuild(). That would clear the 'rendered' tag, unless I am missing something. This gets run during drush cache-rebuild, just like it does for Drupal's rebuild.php.
Comment #85
anavarre
I understand it'll clear Drupal caches but not the corresponding stored HTML responses in Varnish. So, with this patch, a typical deployment workflow would be:
drush cr
rendered
tag
That's the last bit I'm struggling to see how we can best accommodate, if at all possible.
Comment #86
anavarre
Discussed this with @dawehner in IRC and I'll follow his advice to create a follow-up to discuss the deployment process, which is not strictly in scope for this issue, hence the mismatched expectations. Also setting back to Needs review for this reason.
Here we go: #2888082: Deploying Twig template changes is too expensive: it requires all caches to be completely invalidated, as well as all reverse proxies
Thanks for your input, @dawehner and @moshe weitzman - this does help with framing the discussion.
Comment #87
Berdir
drush cc render clears the rendered cache tag.
Comment #88
Wim Leers
My last comment at #60 dates from just before Acquia Build Week. About time I respond here again!
#61 + #63: yay!
#62 + #64—#68: I agree with @greg.1.anderson that it makes sense to provide a default that can be overridden by hosting companies that have multi-webhead setups. Also agreed that doing cleanup immediately is a very bad idea, because it will block the response.
#69 + #70: no, we probably want to stick with the current cleanup behavior by default. So +1 for the change in #71. Although I'd like to see that moved to a separate
cleanupDisk()
method or something like that, because 100% of sites will want to have the
$this->state->delete(…)
line, but X% of sites will want to override
PhpStorageFactory::get('twig')->deleteAll()
.
#74 + #75: yep, deploying Twig changes also requires Render Cache (fragments) + Dynamic Page Cache (skeletons for any user) + Page Cache (entire page for anon user) + Varnish (same) to be invalidated. @anavarre: are you sure that you were triggering
drupal_flush_all_caches()
, by e.g. using the "Clear all caches" button at
/admin/config/development/performance
or
drush cache-rebuild
aka
drush cr
? Because that'd also have cleared Render Cache + Dynamic Page Cache + Page Cache. It's never going to be automatic, because that'd require scanning the file system on every request for Twig changes, which is too costly.
#78 + #79 + #80 + #81 + #85 + #86: this is indeed out of scope here; for that we now have #2888082: Deploying Twig template changes is too expensive: it requires all caches to be completely invalidated, as well as all reverse proxies.
#81: thanks for confirming it works!
Comment #89
Wim Leers
I think what remains to be done here is:
TwigEnvironment::invalidate()
can effectively be overridden by hosting providers. See @greg.1.anderson's reasoning in #52 & #67. And test coverage for this.
$settings['deployment_identifier']
, which would be trivial as I explained in #57. Crucially, this would remove the need for
drush cr
after deploying Twig changes!
It removes the need for
drush cr
… but only if the output generated by the Twig template is not cached in Render Cache, nor in Dynamic Page Cache nor in Page Cache. That's a lot of ifs.We could ensure that those are also updated by keying cache entries for Render Cache/Dynamic Page Cache/Page Cache on the deployment identifier as well. (And we could even expose a
Drupal-Deployment-Identifier
response header that Varnish could key on, which would mean that Varnish would be instantaneously updated as well.)
But all that … seems like it's making the scope of this change too large. Therefore I think it's probably better to defer it to a follow-up. Since this was my proposal and I'm now arguing against it, I think most will agree it should be a follow-up.
So I think that only leaves points 1+2.
Thoughts?
Comment #90
dawehner
Here is a test at least.
Comment #93
anavarre
Comment #94
jofitz CreditAttribution: jofitz at ComputerMinds commented
I'm unable to replicate the test failure locally, so I have corrected the coding standards errors and a couple of whitespace issues and am hoping for a miracle...
Comment #95
Wim Leers
Thanks, dawehner and Jo Fitzgerald! That addressed #89.1. I've attached a test-only patch, to prove that the test coverage is testing what we want. (And yes, obviously we can't test a multi-webhead situation.)
That still leaves the test coverage for #89.2.
Plus feedback (agreement or disagreement) on my proposal in #89.3.
Comment #96
dawehner
I actually agree. Once we add the deployment identifier to everything, is there actually a difference from clearing all the caches in the first place? This is a tricky judgement call, to be honest. Here is the follow-up: #2908460: Identify all areas which needs to key their cache by the deployment identifier
#89.2 ... How would you test that — by asking one example hosting environment to actually implement such a strategy? I mean, the fact that one can override a method doesn't really prove anything.
Working on the change record right now.
Comment #97
dawehner
Comment #98
dawehner
I started adding some really short documentation: https://www.drupal.org/docs/8/theming/twig/deploying-twig-changes
Comment #100
Wim Leers
You just created https://www.drupal.org/node/2908461, but apparently you already created https://www.drupal.org/node/2879963 in May :P I obviously missed that in #89.
RE: #89.2: I don't know yet. But right now it's not really overridable — the hosting provider would have to override or decorate the
twig
service with their own. I'm not sure yet what the answer is, but @greg.1.anderson's reasoning was pretty convincing. Perhaps we can get his thoughts. EDIT: pinged him via Twitter: https://twitter.com/wimleers/status/907890831075495936.
Comment #101
dawehner
Lol, I will remove the old one; it's worse.
Well, that's why one can ship services.yml files in sites/default and also add additional ones via a settings.php.
Comment #102
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
Regarding #100, Pantheon currently uses a custom service to create a lightweight hook into Drupal 8's cache clear mechanism. This is done as follows:
settings.pantheon.php:180
This adds the PantheonServiceProvider iff that class exists. We conditionalize adding this, although Drupal nicely ignores the class if it is not available. Our service provider registers a cache listener on the container, and tags it cache.bin. The Drupal cache manager collects all of the services with the cache.bin tag in the ListCacheBinPass class. When it is time to do a cache clear,
deleteAll()
is called on all of the cache bins. It is, perhaps, not the original intention of this API to be used to clear the edge cache, but it serves us for this purpose, which improves the UX of a Drupal cache clear without requiring us to have a Pantheon module.
It would be nice if the Drupal Twig system could also allow the
invalidate()
operation to be hooked without a module, either by way of using some$GLOBALS['conf']['drupal_twig_invalidate'][]
list, or more generally with a tagged service in the DI container. This would allow hosting providers to alter the behavior of Drupal without requiring database or CMI changes (e.g. to enable a module), which we prefer not to have to manage.
Comment #103
Wim Leers
Thanks for that detailed feedback, @greg.1.anderson, very much appreciated!
@dawehner, what are your thoughts on Greg's suggestions?
Comment #104
dawehner
I don't fully get why it's not enough to just override the twig service. This should be totally possible from within your service provider, which you already have.
Comment #105
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
@dawehner Yes, that sounds sufficient.
Comment #106
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
@dawehner On further reflection, I have one potential concern with simply overriding the twig service. If the suggestion is to replace a singleton class in the DI container, then the limitation there is that the platform would then be incompatible with any module or site that also tried to replace the same service. Perhaps the utility of doing this is so low that it can be presumed that only a hosting provider would ever do it. I don't really have enough Twig background to make that determination; it might be fine. A specific extensible hook would be the safer design decision, though.
Comment #107
dawehner
@greg.1.anderson
Can't you always decorate the existing service?
Comment #108
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
@dawehner If I understand Symfony-style decoration correctly, the new twig service would receive the existing service via injection. The new twig service would have to implement all of the methods of the service interface, and call through to the corresponding methods in the existing service. The disadvantage here is that we have to alter this code every time the twig service interface changes. This means that the hosting provider has to do more work, e.g. if there are multiple versions of the twig service interface that need to be supported at the same time.
I was instead imagining replacing the twig service, in which case the new twig service could extend the standard twig service. This is better in some ways, but I guess that decoration is probably the better choice between these two options.
Having a simple `invalidate()` interface with a minimal API that a hosting provider could register with would be even more stable, though, so that would be my first preference. Decorating the existing service would be workable, though.
Comment #109
dawehner
Well, new methods are at least not added in patch releases; I think the amount of effort potentially needed there is okay. Maybe I'm a bit naive here ...
Comment #110
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
Yes, it is completely tractable from a technical standpoint. However, bear in mind that hosting providers support multiple minor versions of Drupal 8; not all of our customers upgrade, even for EOL versions, so these variants pile up. This work then must be duplicated across all hosting providers who want to manage the Twig cache. Given that, it seems justifiable to add another hook for this.
We'll adapt if that isn't the final outcome here, though.
Comment #111
Crell CreditAttribution: Crell at Platform.sh commented
Speaking for Platform.sh, I'd also favor a more targeted hook/extension point/tool/thing over replacing the whole Twig service. That feels like a sledgehammer, and likely something that all the major hosts would end up doing *slightly* differently for no good reason.
Comment #112
dawehner
I'm wondering whether we could have a way to set callbacks which should be called on invalidation. This would require nothing complicated like tagged services etc.
Service providers are able to add a callable, so this should allow hosting providers adding their own invalidation functions.
Comment #114
dawehner
Oops, no useful default value.
Comment #115
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
I think #112 would work fine, and sounds like a good idea. Pantheon does add an autoloader, so we could provide an object that implements an interface. Supporting callables would still be nice, to allow for injection without the (small) overhead of an autoloader.
Comment #116
dawehner
It would be nice to get feedback from the theme subsystem maintainers on this particular issue.
Comment #117
alberto56 CreditAttribution: alberto56 at Dcycle commented
This patch helped me get out of a stale cache situation on an Acquia-hosted D8 site and works perfectly, thanks! +1
Comment #118
joelpittet
This looks very good.
Is this new public interface needed
addInvalidateCallable()
? And if so, should it get more context passed to its callable, like the twig service it's acting on?
Comment #119
dawehner
@joelpittet
I understand why @greg.1.anderson asked for something like that. On the other hand, I think conceptually this is a separate, pre-existing problem in core, in addition to the main problem that we can't clear the Twig cache at all. Given that, we could defer this callback to another issue and get this issue itself committed more quickly. Any thoughts?
Comment #120
joelpittet
@dawehner, I think we should defer that
addInvalidateCallable()
API addition to a follow-up for now. I think it may be a good idea to add; I'd just like a bit more thought put into it.
Would it be OK to remove it and RTBC without it, @greg.1.anderson?
Comment #121
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
If I am not mistaken, then the purpose of #112 is to implement the solution proposed in #7. In terms of converting a single-head solution into a multi-head solution, what hosting providers need is:
1. The ability to put the twig cache files in a completely different directory each time they are cleared.
- When our scavenger deletes stale cache files, it can just delete entire directories at a time.
2. The ability to know which directory the current cache files are stored in.
- When scavenging stale cache files, we want to skip the directory that is active.
- We currently do this by forcing the directory calculation used in 1 so that we do not need to query Drupal.
- We could also potentially do this simply by examining the creation date of the cache folder, and presume that the newest is the active one.
3. The ability to stop Drupal from deleting the files in the twig cache
- If we have a scavenger, then we don't need to spend time during the web request to delete large directories.
I don't think we really need to pass more information -in- to the callback; what we want is to give the callback more control over the operations we need to be able to alter. I missed this distinction in #115.
Regarding #119 / #120, the current patch makes the cache invalidation situation better, and does not make the multi-head situation any worse than it is today. I therefore think it would make sense to commit here without the callback, and provide more control for multi-head in a follow-on issue.
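The first two requirements above can be sketched as a host-side scavenger. Everything here is invented for the demo (the base path, the per-deployment naming scheme); the only real assumptions are the ones listed in #121: one cache directory per deployment, and the newest directory is the active one.

```shell
#!/bin/sh
# Sketch of a scavenger: one Twig cache directory per deployment,
# the most recently created one is assumed active (point 2 above).
# All paths and names are made up for this demo.
set -e
BASE=/tmp/twig-scavenge-demo
rm -rf "$BASE" && mkdir -p "$BASE"

# Simulate three deployments, oldest first.
for id in deploy-a deploy-b deploy-c; do
  mkdir "$BASE/$id"
  sleep 1            # ensure distinct mtimes
done

# Keep the newest directory (the active cache); delete the stale ones.
active=$(ls -1t "$BASE" | head -n 1)
for d in "$BASE"/deploy-*; do
  if [ "$(basename "$d")" != "$active" ]; then
    rm -rf "$d"      # only stale deployments are removed
  fi
done
ls "$BASE"           # only the active deployment remains
```

Deleting whole stale directories (rather than individual files) is what makes point 3 cheap: Drupal never has to spend request time on cleanup.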
Comment #122
dawehner
@greg.1.anderson
Thank you for your great feedback and explanation of the situation. I highly appreciate that someone from a hosting provider takes the time to analyze these problems fundamentally!
@joelpittet
Can we get a subsystem maintainer review? :)
Comment #123
greg.1.anderson CreditAttribution: greg.1.anderson at Pantheon commented
To be clear, I'm +1 on committing without the callback, or +1 on continuing to work here to make the callback satisfy the needs in #121, and -1 on committing with the callback in its current form. I made a mistake in #115 when I said that the callback would work for us.
Comment #124
dawehner
@greg.1.anderson Do you want to roll the patch without the callback?
Comment #125
Skarlso CreditAttribution: Skarlso at Cheppers for Acquia commented
Hi.
Chiming in from Acquia here. I'm not really a Twig or Drupal developer per se; I'm a Cloud Engineer who has only recently started working with Drupal.
I saw comments here saying that Twig files can be numerous, like thousands of them, so clearing them out might take some processing time.
If I understand this correctly, and read the patch correctly, without much Drupal knowledge (I'm in the process of acquiring that), we'll be able to register a callback on the invalidate event that basically does something like:
rm -fr /mnt/tmp/whatever
Is that a correct understanding?
If so, that callback could be a parallel task which does the work, but then comes the concern that other processes might be trying to access the cache at that moment, meaning we'd need some kind of locking on it.
Is that right?
Comment #127
webchick
Sounds like a "needs work", for re-rolling the patch without the callback.
Also, looks like we have tests now, so removing that tag.
Comment #128
yogeshmpawar
Comment #129
yogeshmpawar
Comment #130
dawehner
@joelpittet was okay with the patch without the callback. Here is a reroll.
This might be possible in the future. With the current version of the patch you would need some logic on the filesystem to clear out files — basically anything older than the last deployment, whose timestamp you should somehow have available as information.
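That filesystem logic could look like the following sketch. The directory and the marker file are invented for the demo; the idea is just "delete compiled templates older than the last deployment", using a file touched at deploy time as the timestamp reference.

```shell
#!/bin/sh
# Sketch: delete compiled-template files older than the last deployment.
# The directory and the DEPLOY_MARKER file are invented for this demo;
# a real deploy process would touch the marker at deploy time.
set -e
DIR=/tmp/twig-age-demo
rm -rf "$DIR" && mkdir -p "$DIR"

touch "$DIR/stale_template.php"   # compiled before the deployment
sleep 1
touch "$DIR/DEPLOY_MARKER"        # written by the deployment step
sleep 1
touch "$DIR/fresh_template.php"   # compiled after the deployment

# Remove every file that is not newer than the marker (except the marker).
find "$DIR" -type f ! -name DEPLOY_MARKER ! -newer "$DIR/DEPLOY_MARKER" -delete

ls "$DIR"
```

Because templates compiled after the new prefix took effect are always newer than the marker, only genuinely stale files match the `find` expression.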
Comment #131
anavarre
To make sure I understand this correctly, are you suggesting we go down this route for now (leave it up to web hosts to wipe Twig cached files on code deploy) and try to improve this in a follow-up? If that's the case, what's left to be done here, then (besides reviews)?
Comment #132
dawehner
Yes, you are right. There is nothing to be done here besides reviews.
Comment #133
dawehner
Comment #134
Anonymous (not verified) CreditAttribution: Anonymous commented
I do not know whether this was discussed before, but when the user switches from PHP 5 to PHP 7 (Drupal already recommends doing this), it can lead to a white screen.
This is because the Twig cache can contain code for the "Twig C extension", which exists for PHP 5 on many hosts, but not for PHP 7.
This is not an obvious point, because the error message does not directly indicate the absence of the "Twig C extension". But if the usual "clear cache" action included clearing the Twig cache, it would help many users.
Examples:
Comment #137
lauriii
@vaplas that seems like a separate issue from this one. Can you open a new issue so that it can be discussed there?
+1 also for moving invalidate callables for another issue.
I added information from #131 & #132 to the change record since it seems like a relevant question for anyone interested in the change record.
I also updated the issue credits.
I don't see any other open concerns. Changing this to RTBC since the approach looks good to me as well. 🚀
Comment #138
Anonymous (not verified) CreditAttribution: Anonymous commented
@lauriii, sure. #2887124: PHP Fatal error: Call to undefined function twig_template_get_attributes() already exists for the point in #134.
Edit:
I also thought that the current patch already solves this problem, doesn't it?
Comment #139
alexpott
We need to update
drupal_rebuild()
too, in utility.inc.
Comment #140
alexpott
Comment #141
lauriii
@alexpott good catch!
Comment #142
dawehner
@alexpott
I don't think so.
drupal_rebuild()
calls
drupal_flush_all_caches()
which calls
\Drupal::service('twig')->invalidate();
as part of this patch.
Comment #143
alexpott
Adding more credit.
Comment #144
alexpott
Committed ada497a and pushed to 8.6.x. Thanks!
We might think about committing this to 8.5.x since it is purely additive and there is not much BC concern, but I didn't do that because it adds a new method.
Comment #146
anavarre
Thank you so much for working on this issue, everybody! It's a huge win for high-traffic sites running on multiple web heads.
Comment #147
Wim Leers
I still can't quite believe this finally landed. 😲 🎉🎉🎉
Clarified CR a little bit.
Would be great if Pantheon, platform.sh and Acquia Cloud all would share their multi-webhead deletion strategies for the locally (per-webhead) stored Twig compiled templates. They all need to solve this problem. And it's better for the entire Drupal ecosystem if they all solve it in a consistent manner.
Comment #148
webchickGREAT work everyone! :D
Comment #149
joelpittet
Sorry to rain on the party.
drush cr
seems to be triggering:
Traced it back to this change, and it looks like
\Drupal::getContainer
is
NULL
. Do we need to have a container built for this?
Comment #150
alexpott
@joelpittet yep, you're right.
Comment #152
alexpott
Committing #130 since this was introduced by #141 because of my feedback. Will open a follow-up issue about utility.inc and repeating work in drupal_flush_all_caches().
Committed fa5df47 and pushed to 8.6.x. Thanks!
Comment #154
alexpott
Opened #2969737: drupal_rebuild() does unnecessary work because it calls drupal_flush_all_caches()
Comment #155
Anonymous (not verified) CreditAttribution: Anonymous commented
@ademarco also created an issue about #149, but apparently forgot to add it here.
Comment #156
joelpittet
Thanks for the quick fix, @alexpott!
Comment #157
Fabianx CreditAttribution: Fabianx as a volunteer commented
Happy this landed, but I am still of the opinion this should have used cache tags instead.
However cache tags could be used in addition as the main thing is the addition of the invalidate method.
e.g. my proposal for a follow-up:
- Inject cache_tags.checksum
- Use 'twig-templates' cache tag
- Calculate the checksum
- Use the checksum as "cs[NUM]-" prefix before the actual uniqid prefix
- Use Cache::invalidateTags(['twig-templates']); as invalidation method instead of deletion of the state
- Hosting providers can just watch for the invalidation of 'twig-templates' cache tag and do a clean-up job for the previous checksum
- Hosting providers can check if the checksum is valid during garbage collection
- Even without monitoring, garbage collection is dead easy, as checksums for an individual cache tag are currently always increasing
Does anyone want to open a follow-up issue for that?
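Under the proposal above, garbage collection reduces to comparing the checksum embedded in each directory prefix against the current one. A rough shell sketch follows; the directory names and the checksum value are invented for the demo (in Drupal the current checksum would come from the cache_tags.checksum service).

```shell
#!/bin/sh
# Sketch of checksum-based garbage collection for the "cs[NUM]-" prefix
# idea above. Directory names and the checksum value are demo inventions;
# in Drupal the current checksum would come from cache_tags.checksum.
set -e
BASE=/tmp/twig-cs-demo
rm -rf "$BASE" && mkdir -p "$BASE"
mkdir "$BASE/cs1-5915e35d823c3" "$BASE/cs2-8a0bd91fe12a7"

current=2   # checksum after the latest invalidation of 'twig-templates'

for d in "$BASE"/cs*-*; do
  n=$(basename "$d" | sed 's/^cs\([0-9]*\)-.*/\1/')
  if [ "$n" -lt "$current" ]; then
    rm -rf "$d"   # stale: belongs to an older checksum
  fi
done
ls "$BASE"        # only the current-checksum directory remains
```

Because the checksum only ever increases, "lower than current" is a complete and unambiguous staleness test, which is what makes this scheme attractive for host-side cleanup.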
Comment #158
dawehner
Please do it :) It doesn't mean you have to work on it :)
Comment #160
anavarre
@Fabianx did you ever file this issue? If yes, could you please link it here?
Comment #161
Anonymous (not verified) CreditAttribution: Anonymous commented
Done. #2975479: Improve clearing the Twig cache
Comment #162
anavarre
Thanks @vaplas!
Comment #163
Wim Leers
The CR says:
Created an issue for this: #2979669: Follow-up for #2752961: automatically deleting compiled Twig templates across multiple webheads. I think @greg.1.anderson's proposal in #121 is feasible.
Comment #164
joelstein CreditAttribution: joelstein at On Fire Media commented
If somebody needs a quick solution for Drupal 8.5, this works for me via drush:
drush php-eval "\Drupal\Core\PhpStorage\PhpStorageFactory::get('twig')->deleteAll();"
Comment #165
Elijah Lynn
Re #164: I couldn't get this to work when changing twig.config debug: false/true. `drush cache-rebuild` still works though; I was just looking for a faster way.
Comment #166
geek-merlin