authcache_builtin_expire_v2_expire_cache() can easily attempt to clear hundreds of thousands of cids in one hit, and cache_clear_all() then blows out memory usage.

In this case I've seen $cids contain more than 300,000 rows.

My patch breaks the processing up into chunks of an arbitrary size, which seems to stop the memory blowouts; a sketch of the idea follows.
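Roughly like this (a minimal sketch of the chunking idea, not the exact patch; the variable name and the default chunk size of 500 are illustrative assumptions):

  <?php
  // Process the cids in bounded batches instead of one huge call; the
  // variable name and the default of 500 are illustrative only.
  $chunk_size = variable_get('authcache_builtin_cache_clear_chunk_size', 500);
  while (!empty($cids)) {
    // array_splice() shrinks $cids in place, so peak memory stays bounded.
    $batch = array_splice($cids, 0, $chunk_size);
    cache_clear_all($batch, 'cache_page');
  }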


Comments

dgtlmoon created an issue. See original summary.

dgtlmoon’s picture

dgtlmoon’s picture

Slight change: re-include the !empty($cids) check.

dgtlmoon’s picture

Updated patch with the define() removed, a variable_get() in its place, and a lower chunk value.

dgtlmoon’s picture

dgtlmoon’s picture

This patch is even better: it limits the size of the $cids array itself.

znerol’s picture

Is this a multilingual site? If so, would you mind testing the patch in #2507797: authcache_builtin_expire_v2_expire_cache is a killer with locale on?

dgtlmoon’s picture

Unfortunately not; it's a single-language site.

dgtlmoon’s picture

The general issue is that $cids can get so huge that it eats up an extra 200 MB of RAM when saving/editing nodes and so on (our $cids had over 310,000 entries).

znerol’s picture

I feel that 300,000 entries is an awfully big number. How long does it take to perform that operation? Keep in mind that the purpose of the Cache Expiration module is to selectively remove items from the cache. That said, the first thing I'd do on a site like this is double-check the expiration rules in order to reduce the number of entries affected by any one operation.

If it is really necessary to kill the cache of so many pages, then it might be just cheaper (and far less complex) to stick with the default behavior and remove the Cache Expiration module from the mix. Operations on the content will then simply kill the whole cache.
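For comparison, dropping Cache Expiration means content operations fall back to flushing the whole page bin, which in Drupal 7 is roughly a single wildcard clear:

  <?php
  // Flush every entry in the page cache bin in one call.
  cache_clear_all('*', 'cache_page', TRUE);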

My technical reasons for being skeptical about this patch are the following:

  1. The executeInternalExpiration() method in the Cache Expiration module does not seem to require this.
  2. If we really want to provide support for expiring an arbitrarily big number of cached items, then the proper way to do it is queueing/batching; otherwise someone else will run into another limit (e.g., execution time). A rough sketch of that approach follows.
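What queueing could look like (a hypothetical sketch using Drupal 7's core Queue API; the queue name, function names, and chunk size of 500 are assumptions, not part of any posted patch):

  <?php
  // Enqueue the cids in fixed-size chunks instead of clearing them inline.
  function example_enqueue_cid_chunks(array $cids) {
    $queue = DrupalQueue::get('example_authcache_expire');
    foreach (array_chunk($cids, 500) as $chunk) {
      $queue->createItem($chunk);
    }
  }

  // hook_cron_queue_info(): register a cron queue worker so each cron run
  // clears chunks for at most 30 seconds, bounding memory and execution time.
  function example_cron_queue_info() {
    return array(
      'example_authcache_expire' => array(
        'worker callback' => 'example_expire_cid_chunk',
        'time' => 30,
      ),
    );
  }

  // Worker: clear one chunk of page-cache entries per queue item.
  function example_expire_cid_chunk(array $chunk) {
    cache_clear_all($chunk, 'cache_page');
  }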
znerol’s picture

Status: Active » Postponed (maintainer needs more info)
znerol’s picture

Status: Postponed (maintainer needs more info) » Closed (outdated)