Problem/Motivation

Drupal core ships with cache tags support - awesome.
To ensure that the tags for an item have not been invalidated since it was written, a CacheTagsChecksumInterface service is (eventually) consulted on cache read, and D8 core provides a database-backed implementation of that service.

My issue is with the cache tags sub-system internals. As implemented, the list of cache tags on the platform grows endlessly due to the DatabaseCacheTagsChecksum implementation.

Consider the following scenario with highly volatile custom entities: add 100k instances, delete them all, then add a new 100k.

The system will end up with 200k cache tags in the table, 100k of which will never be used again. They will just sit there, clutter the database, and cause overall slow-downs. Imagine this process continuing for a while...

Even after a full cache clear (which happens only rarely), all tags are still kept there.

In my case the cache tags query is the highest-throughput and most time-consuming DB query in the whole system, even though it's fast on average. I have around 40-50k valid entities in the system and around 120-130k cache tags in the table.

I think this problem affects only the DB implementation: Memcache and Redis (if they have implementations of the interface) scale in O(1) rather than O(log N) with the amount of data in the system. On top of that, they have robust garbage collection mechanisms for when memory pressure builds up. SQL databases have none of that.

Proposed resolution

Can we have the list of cache tags truncated when the whole cache gets cleared?
I suspect this should not be a problem, as the whole set of caches was just invalidated anyway.
I also think cache tags should be deleted whenever content is deleted on the system.

We should also consider deleting cache tag entries whenever the related entity is deleted (if possible). For example: deleting the node with ID 1 would also delete the node:1 tag from cachetags. Cache items whose checksums depend on it would still be invalidated, because the counter would then read 0 instead of whatever value was present before, so the checksums would no longer match.
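A minimal sketch of that idea (a hypothetical hook implementation in a hypothetical module "mymodule", not an existing core mechanism):

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_delete().
 *
 * Sketch only: once core has invalidated the deleted entity's tags, drop
 * their rows from the cachetags table so dead tags do not accumulate.
 * Caveat: this resets the counters to 0 and relies on the checksum
 * equality check to invalidate any items that still reference them.
 */
function mymodule_entity_delete(EntityInterface $entity) {
  \Drupal::database()->delete('cachetags')
    ->condition('tag', $entity->getCacheTagsToInvalidate(), 'IN')
    ->execute();
}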

Any other ideas are welcome.

Remaining tasks

Discussion, decision, patch...

User interface changes

None.

API changes

TBD. None expected.

Data model changes

TBD. None expected.

Release notes snippet

TBD.

Comments

ndobromirov created an issue. See original summary.

Wim Leers’s picture

Status: Active » Postponed (maintainer needs more info)

> Consider the following scenario with highly volatile custom entities: add 100k instances, delete them all, then add a new 100k.
>
> The system will end up with 200k cache tags in the table, 100k of which will never be used again. They will just sit there, clutter the database, and cause overall slow-downs. Imagine this process continuing for a while...

This is indeed expected behavior.

You're doing something pretty atypical: creating 100K entities, deleting them, then recreating them.

Is this a custom entity type? Are entities of this type always as ephemeral as in your description?

gapple’s picture

My use case is a migration that regularly consumes content from an external source; after a period the content expires and the nodes are deleted. It's probably only up to a few hundred nodes per day in my case, but there are currently 1.5M cachetags on the site due to resetting the migration frequently in the past.

ndobromirov’s picture

Status: Postponed (maintainer needs more info) » Needs review

In our case the problem is not as extreme as the example in the description, but there are still ~70k unused cache tags in the cachetags table.

It is content from an external system that is synced into local custom entities.
200-300 editors manage content in that system and from time to time delete content.
The deleted items are propagated to Drupal and get deleted there as well, daily.
We only delete the delta of the change (not delete-all/add-all), usually 50-150 deleted items daily out of ~80k total.

We also had to clear some redirects related to that content due to URL problems (all those deleted redirects' cache tags are still there), etc.

Long story short: over about a year we've accumulated ~60k+ dead cache tags in total.
They are a mix of our custom entities, redirects, terms, nodes, etc.

This query started showing up as a top DB consumer in New Relic, so I dug in a bit :).


> This is indeed expected behavior

I suspected so, as I came to the same conclusion going through the code's documentation.

The issue was opened with the aim of documenting that this is the intended behavior, and hopefully of finding a direction for cleaning up the dead tags in a safe way.

The easy things I can think of doing are:
- An update hook that truncates the table as a one-off (see the sketch after this list).
- A Drush command that does essentially the same, plus a full cache clear upon invocation. We could run that once per month, so the data does not pile up too much.
- Another direction is to move to a Memcache / Redis implementation of the checksum service (it seems both modules provide one). As they have native LRU eviction internally, sooner or later dead cache tags will get reclaimed.
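
For the first option, a minimal sketch (hypothetical module name "mymodule"): flush all caches first, so no cache item survives with a checksum computed against the old counters.

/**
 * One-off cleanup: flush all caches, then truncate the cachetags table.
 */
function mymodule_update_9001() {
  drupal_flush_all_caches();
  \Drupal::database()->truncate('cachetags')->execute();
}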

Personally, it seems like strange behavior to just pile up the data endlessly, with no process or tool to reclaim those resources other than truncating the table manually via SQL.

ndobromirov’s picture

Status: Needs review » Active

Wrong status...
There is still nothing to review.

jcisio’s picture

In one of our projects, which has a lot of imports, there are 1.7 million entries in the cachetags table. A simple query like UPDATE {cachetags} SET invalidations = invalidations + 1 takes 10 seconds. Luckily Drupal 8.8, which added transaction support to cachetags invalidation, partly helps with that, but a large table is always a problem.

Version: 8.9.x-dev » 9.1.x-dev

Drupal 8.9.0-beta1 was released on March 20, 2020. 8.9.x is the final, long-term support (LTS) minor release of Drupal 8, which means new developments and disruptive changes should now be targeted against the 9.1.x-dev branch. For more information see the Drupal 8 and 9 minor version schedule and the Allowed changes during the Drupal 8 and 9 release cycles.

Version: 9.1.x-dev » 9.2.x-dev

Drupal 9.1.0-alpha1 will be released the week of October 19, 2020, which means new developments and disruptive changes should now be targeted for the 9.2.x-dev branch. For more information see the Drupal 9 minor version schedule and the Allowed changes during the Drupal 9 release cycle.

Version: 9.2.x-dev » 9.3.x-dev

Drupal 9.2.0-alpha1 will be released the week of May 3, 2021, which means new developments and disruptive changes should now be targeted for the 9.3.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

toamit’s picture

Just released the Cache Utility contributed module to truncate the cachetags and cache_* tables via the UI, Drush, and curl commands. With this module I can set up a cron job to periodically purge the cachetags table to keep database sizes sane.

jweowu’s picture

Could someone give a summary of the logic behind this behaviour?

In particular, why is cachetags excluded from a full cache purge?

If the purpose of the cachetags table is to detect whether or not it would be valid to obtain and use a given cache entry, but the entire cache has been purged (meaning that there are no cache entries to obtain, valid or otherwise), then why do we need that old cachetags data to stick around?

The only thing that occurs to me offhand is that, in the middle of the process of purging all of the caches one by one, some of the caches may be rebuilt before the last of the caches is emptied, and furthermore some of the new entries in the rebuilt caches might be invalidated before the remaining cache purges are completed -- in which case you'd want to retain the information about the invalidations which occurred since the purge began.

If that's why, is there any reason not to use a REQUEST_TIME timestamp instead of the sequential invalidations integer counter in the cachetags table? I may well be missing something as I've only started looking at this, but at present I don't understand why the specific number of invalidations might be needed.

The only code which calls getTagInvalidationCounts() is calculateChecksum(), so my firm impression is that invalidations just needs to differ from all previously-used values for the purpose of generating a different checksum. In that case a timestamp would (a) achieve that equally well; (b) eliminate any need to do +1 calculations when updating; and (c) allow us to say "We started the cache purge at time T and have finished that purge, so now delete all cachetags rows with a timestamp < T, because all of those rows were made redundant by the purge."

jweowu’s picture

Thinking more clearly, a fairly obvious answer to my question is that a given item may be cached and then invalidated multiple times during the same request, or by different requests happening at the same REQUEST_TIME, which does mean you need something like a sequence with write-locking to enforce uniqueness.

A timestamp could perhaps be an additional column in that case, rather than a substitute for the count? The ability to flush old rows still seems important.

loopy1492’s picture

We also have 70,000+ cachetags due to a weekly content import. Is there anything we can run on cron to clear this out?

toamit’s picture

@loopy1492 See a new module for this purpose noted in #11

Version: 9.3.x-dev » 9.4.x-dev

Drupal 9.3.0-rc1 was released on November 26, 2021, which means new developments and disruptive changes should now be targeted for the 9.4.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

Version: 9.4.x-dev » 9.5.x-dev

Drupal 9.4.0-alpha1 was released on May 6, 2022, which means new developments and disruptive changes should now be targeted for the 9.5.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

acbramley’s picture

We are seeing these issues as well when investigating problems with Redis. In our case, there are over 800k webform_submission cachetags sitting in Redis. Most of these submissions no longer exist, as we have logic to purge them every 2 weeks and archive them into an S3 bucket. Since these cache entries never expire, the set will just continue to grow over time.

Version: 9.5.x-dev » 10.1.x-dev

Drupal 9.5.0-beta2 and Drupal 10.0.0-beta2 were released on September 29, 2022, which means new developments and disruptive changes should now be targeted for the 10.1.x-dev branch. For more information see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

hkirsman’s picture

We found 8 million cachetags entries. We are sure about 7.2 million of them no longer have a corresponding entity.

We have yet to figure out whether that's the cause of the performance issues on the site.

Wim Leers’s picture

7.2 million entities have disappeared?!

What entity type is this? Is it perhaps one without storage?

hkirsman’s picture

It's a custom feature. On the Drupal side we create queue items (a custom entity) for search indexing when saving a node. As we use Elasticsearch, we don't want to make saving a batch of nodes slow, e.g. save 100 nodes, send 100 requests to Elasticsearch.

I didn't know there would be leftovers after deleting nodes / entities.

Wondering if this would be an OK fix for now:

In .install file:

/**
 * Removes leftover elastic_request cache tags.
 */
function my_module_update_8003() {
  \Drupal::database()->delete('cachetags')
    ->condition('tag', 'elastic_request:%', 'LIKE')
    ->execute();
}

And in that custom entity class, override postDelete(). I didn't see postDelete() doing anything other than invalidateTagsOnDelete(). I would have also called parent::postDelete() if it wouldn't somehow re-add that entry to the DB.

class ElasticRequest extends ContentEntityBase implements ElasticRequestInterface {

  public static function postDelete(EntityStorageInterface $storage, array $entities) {
    // Delete elastic_request cache tag when deleting elastic_request entity.
    // @todo Remove postDelete() after https://www.drupal.org/project/drupal/issues/3097393 is fixed.
    foreach (array_keys($entities) as $entity_id) {
      \Drupal::database()->delete('cachetags')
        ->condition('tag', 'elastic_request:' . $entity_id)
        ->execute();
    }
  }

  ...

Version: 10.1.x-dev » 11.x-dev

Drupal core is moving towards using a “main” branch. As an interim step, a new 11.x branch has been opened, as Drupal.org infrastructure cannot currently fully support a branch named main. New developments and disruptive changes should now be targeted for the 11.x branch, which currently accepts only minor-version allowed changes. For more information, see the Drupal core minor version schedule and the Allowed changes during the Drupal core release cycle.

SamLerner’s picture

Could someone describe the potential negative impact of clearing the entire cachetags table?

Or, how do you determine which tags are no longer in use? Check each one to see if it references deleted content? Could we use that to run a cleanup command on a regular basis?

gapple’s picture

As I understand it, there is no problem if cachetags are cleared at the same time as a full cache flush: a new cachetags entry will be added to the database as needed when a tag is next invalidated. If done automatically when flushing the cache, it would just make the operation take longer.

If tags are cleared without clearing cache items, there is a specific, probably unlikely, case where a cached item could have a matching checksum and be served despite being out-of-date.
The checksum is a sum of the invalidations on the item's tags, so if:
- the item is saved with A:1, B:0 (checksum 1)
- tags are cleared, so invalidations are reset to 0
- B is invalidated
- the cache item is valid against the new checksum of A:0, B:1 despite being out-of-date

The checksum is checked for equality, so a lower checksum in the database will still invalidate the cache item (e.g. an item's two tags sum to a checksum of 12, but the cachetags rows in the database have both been reset to 0; since 0 != 12 the item will be invalid).
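
To make that concrete, a simplified sketch of the checksum idea (a model only, not the exact DatabaseCacheTagsChecksum code):

// Model: the checksum stored with a cache item is the sum of its tags'
// invalidation counters at write time; on read, the item is valid only if
// the current sum equals the stored one.
function calculate_checksum(array $counters, array $tags): int {
  $sum = 0;
  foreach ($tags as $tag) {
    $sum += $counters[$tag] ?? 0;
  }
  return $sum;
}

// Item written while A=1, B=0: stored checksum is 1.
$stored = calculate_checksum(['A' => 1, 'B' => 0], ['A', 'B']);
// Counters reset to 0, then B invalidated: the current sum is also 1, so
// the stale item would wrongly validate -- the edge case described above.
$current = calculate_checksum(['A' => 0, 'B' => 1], ['A', 'B']);
var_dump($stored === $current); // bool(true)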

jweowu’s picture

IIUC (see #12) the main problem will be that you cannot (reliably) clear all cachetags "at the same time" as clearing the caches.

Even trusting that all cache backends will reliably behave the same way, clearing caches invokes hooks so that modules can react and clear their data, and I assume there's no guarantee that along the way some of that arbitrary code doesn't cause something to be cached and invalidated.

If all of your caches were in the database and you knew where they all lived and you executed a single transaction which (only) truncated all the cache tables, deleted any other data which was required, and deleted the cachetags then I imagine that would be fine; but in practice I don't think "clearing all caches" is nearly so clean a process.

You can't naively purge cachetags before clearing other caches, because cache entries may be needed in the process of clearing caches.

You can't naively purge cachetags after clearing other caches, because in the process of clearing caches you may have acquired and invalidated new cache entries.

(n.b. This is my speculation -- I don't have a deep understanding of these processes so I might be wrong, but thinking about the issue has led me to those conclusions.)

I expect we need a way to mark the pre-existing cachetags prior to the full purge, so that entries which are unchanged following the purge can be recognised and removed.

Eduardo Morales Alberti’s picture

Some of the questions here relate to volatile custom entities. All entities' cache tags are invalidated on post-delete, which makes sense if you are using these entities in cached places, as is the case with nodes. But if those cache tags are not used anywhere, it is better to override the postDelete() method to avoid adding new entries to the database.

The postDelete() method in EntityBase.php:

  public static function postDelete(EntityStorageInterface $storage, array $entities) {
    static::invalidateTagsOnDelete($storage->getEntityType(), $entities);
  }

Overriding the postDelete() method with an empty implementation means the tags will not be invalidated on delete, so no new cachetags rows get created.
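
A minimal sketch of such an override (hypothetical entity class; only safe if nothing caches against these tags):

class MyVolatileEntity extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public static function postDelete(EntityStorageInterface $storage, array $entities) {
    // Intentionally skip parent::postDelete(), which would call
    // static::invalidateTagsOnDelete() and write cachetags rows.
  }

}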

Amavi’s picture

Hi :)

I have been using Drupal with OVH for 5 years. Everything is very nice with Drupal, but complex, as you know... But I have had ONE problem for those 5 years, always the same, and for 5 years I have been looking for a solution...

My SQL database keeps growing. With PHP 8 I need to clear the cache every day or my SQL data grows to more than 8 GB, and then I get blocked by my hosting... :(

Can someone explain a solution to me please? I've been trying for 5 years...

I am on Drupal 9.5

Thx. :)

catch’s picture

Replying to #26: cache tags can pretty reliably be cleared after the bins are emptied. If a new cache item is created and not invalidated, its cache tag checksum will be 0, which matches the cache tag blank slate.

If it's created and then immediately invalidated, its cache tag checksum will be 1 or more, which will not match 0, so it will be invalidated when next retrieved.

The only case that is not covered is: an item is invalidated and has a checksum of e.g. 1, the cache tags are wiped, the item is not requested, and then its tags are invalidated in such a way that the checksum matches again, with the item only requested after that. This is such an extreme edge case/race condition that it can probably be ignored.

A workaround would be a "last cache tag garbage collection" timestamp to compare against, written alongside the cache items. Anything created before it would also be treated as wiped, but that adds an extra state or k/v request on every request.
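
A rough sketch of that workaround (all names hypothetical):

// Sketch only: an item written before the last cache tag garbage
// collection cannot be trusted, because its tags' counters may have been
// reset since then; everything else uses the normal checksum test.
$last_gc = \Drupal::state()->get('cachetags.last_gc', 0);
$valid = $item->created >= $last_gc
  && $checksum_provider->isValid($item->checksum, $item->tags);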

jweowu’s picture

> This is such an extreme edge case/race condition that it can probably be ignored.

I can only assume that wasn't the conclusion when this was implemented, though -- it doesn't seem plausible that the question of clearing cachetags along with other caches never came up in discussion at the time.

> A workaround would be a cache tag last garbage collection timestamp to compare against

Yeah, I was also pondering that approach in #12 and #13. I think it's a pretty reasonable idea.

An alternative would be to have a pre-purge process which copies the cachetags table to a temporary table, and then post-purge deletes all cachetags matching an entry in the temp table. That has its own noteworthy costs, but they're isolated to the time of the purge. If the table was really gargantuan, though, it might not be great (and by the time a fix is deployed, some sites will inevitably have tables fitting that description). Offhand I think I'd lean towards the timestamp column.

jweowu’s picture

Amavi: Is that on account of the cachetags table specifically? If a normal cache rebuild fixes things for you, even temporarily, then it's definitely not about cachetags (the entire reason for the present issue is that cache rebuilds do not purge the cachetags table).

Your database has many different cache tables, and I suspect your problem is something different from this issue. You should start by confirming which specific cache table(s) are getting so large, and then you can look for or post an issue related to that.

catch’s picture

#636454: Cache tag support is the original issue. I worked on it at the time and haven't reread it for ages, but it would not surprise me if purging just didn't get discussed, or got deferred to a follow-up that never happened. It was more or less the first API addition in Drupal 8, years before a release, so that particular problem was very abstract for a very long time: no real sites used it for ages.

jweowu’s picture

A more palatable variant of the temporary-table suggestion has occurred to me. I don't know how practical it is, but in principle I think it avoids the downsides of the other approaches mentioned.

In essence, while the cache rebuild is taking place, new cache invalidations get written to a temporary table. Then, after the cache rebuild, the cachetags table is truncated, and the rows of the temporary table are inserted.

In order for that to work, cache lookups need to know about the temporary table, something like:

if (a cache rebuild is in progress) {
  check the temporary table for cache validity
  if (the temporary table contained a row for that cache id) {
    return result;
  }
  else {
    // nothing about this ID in the temporary table
    check the regular cachetags table
    return result
  }
}
else {
  // not currently rebuilding the cache
  check the regular cachetags table
  return result
}

And similarly, when invalidating a cache entry, the new invalidations value written to the temporary table would be an increment of the value in the temporary table if a row already existed there, and otherwise an increment of the row from the original cachetags table.
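
A sketch of that invalidation step (hypothetical temporary table name "cachetags_rebuild"):

// Sketch only: during a rebuild, base the new counter on the temporary
// row if one exists, otherwise on the original cachetags row, and write
// the result back to the temporary table.
$db = \Drupal::database();
$current = $db->query('SELECT [invalidations] FROM {cachetags_rebuild} WHERE [tag] = :tag', [':tag' => $tag])->fetchField();
if ($current === FALSE) {
  $current = $db->query('SELECT [invalidations] FROM {cachetags} WHERE [tag] = :tag', [':tag' => $tag])->fetchField();
}
$db->merge('cachetags_rebuild')
  ->key('tag', $tag)
  ->fields(['invalidations' => (int) $current + 1])
  ->execute();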

The lookups during cache rebuilds could be on a join of the two tables, rather than two separate look-ups.

No timestamp column needed, and no wholesale copying of cachetags; and outside of cache rebuilds the behaviour can be much the way it is at present.

Is that practical? I won't be surprised if I'm missing something, and I haven't thought through the ramifications of multiple simultaneous cache rebuilds (if that's currently permitted to happen), but it seemed worth suggesting.

catch’s picture

I think we should just add cache tag purging as a step in drupal_flush_all_caches(), immediately after emptying the bins, document the potential race condition (a cache item is set and invalidated, then not requested again until a further invalidation makes its checksum match once more), and open a follow-up for it. The chances of that happening are minuscule, whereas the potential issues from storing timestamps or creating temporary tables would affect everyone.
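
A sketch of what that step could look like (not current core code; drupal_flush_all_caches() does much more than this):

use Drupal\Core\Cache\Cache;

// Sketch only: empty every cache bin, then wipe the tag counters so new
// items start from a blank slate with a checksum of 0.
foreach (Cache::getBins() as $cache_backend) {
  $cache_backend->deleteAll();
}
\Drupal::database()->truncate('cachetags')->execute();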

Also, I think it's worth looking at starting and ending a database transaction around drupal_flush_all_caches(), in case that's viable.

raduciobanu’s picture

Got the same issue but on a larger scale, on a site with lots of webform submissions, around 3.5M+; for each submission there's an entry in the cachetags table, currently sitting at a bit more than 4M rows.

MariaIoann’s picture

I have the same problem with Message entities. I create messages as notifications for a large number of users, but they are purged when they are 30 days old. Currently we have 400K messages, but the cachetags table has 23M entries for messages, as it includes all deleted messages as well.
Is it safe to delete the relevant cache tags in a message postDelete hook?
And is it safe to batch-delete all cache tags of non-existing messages as a one-off cleaning action?
What I have not understood is: why don't we delete an entity's related cache tag when the entity itself gets deleted?

catch’s picture

> What I have not understood is: why don't we delete an entity's related cache tag when the entity itself gets deleted?

Cache tag storage and checksum implementation are swappable, so there's no inherent concept of a cache tag existing as a row in a database with a counter. It would, for example, be possible (but very slow) to store the cache tags with the cache items, query all cache items when a cache tag is invalidated, and have no dedicated cache tag storage at all. Because of this, there's no concept of the tag as a thing existing in itself. The checksum implementation that core uses introduces the counter system on top of string tags, but consumers of the Cache API, like the entity system, don't need to know about it: they just get/set/delete cache items and invalidate tags.
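
In other words, all a consumer ever sees is tag strings, for example:

use Drupal\Core\Cache\Cache;

// The consumer only deals in tag strings; whether a counter row exists
// somewhere for 'node:1' is an implementation detail of the checksum
// service.
Cache::invalidateTags(['node:1', 'node_list']);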

If you have a high traffic site with a lot of users/content, you should strongly consider using https://www.drupal.org/project/redis, which won't run into this problem because it evicts items when it runs out of memory. This is a good idea for lots of reasons other than a large cache tags table.

Wim Leers’s picture

@MariaIoann

  1. Nothing prevents you from saying "my entity does not need cache tags". See \Drupal\Core\Entity\EntityInterface::getCacheTagsToInvalidate() and \Drupal\Core\Cache\CacheableDependencyInterface::getCacheTags(). Message entities as you describe them appear very ephemeral, so it makes sense to me that they would not use/need cache tags (see the sketch after this list).
  2. See \Drupal\Core\Datetime\Entity\DateFormat::getCacheTagsToInvalidate() for another example.
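
A minimal sketch of opting out for such an ephemeral entity type (hypothetical class; only safe when nothing renders or caches against per-message tags):

class Message extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public function getCacheTagsToInvalidate() {
    // Sketch: emit no per-entity tags, so deleting a message never
    // creates a cachetags row.
    return [];
  }

}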