Follow up for #1261846-22: Document 1MB maximum size limit for cache_set() - #24
Problem/Motivation
The original issue added a documentation comment as a stop-gap. Really, though, the underlying problem needs to be fixed.
(Update this summary after reading the original issue more carefully. To be done by anyone.)
In a way, it would be better if the cache system handled this itself. Checking the size of an object or array could be really fast or really slow, depending on PHP internals. I don't know. If it is slow, it would obviously be a bad idea; but if it is very fast, then it may be better to move that check into the cache system.
Especially since 1MB isn't actually a hard limit (it depends on configuration), the current advice is just to be careful not to dump massive amounts of data into the cache and assume it will work. If you checked the size and then decided not to write to the cache, that would be worse than throwing errors, since it fails silently in that case. And the main issue with a failed write like this isn't the PDO exception, it's the cache miss on every request, since usually the first thing to go is the theme registry cache, which takes about one second to build.
There's a patch against Memcache to log when writes fail and include the size of the item: #435694: >1M data writes incompatible with memcached (memcached -I 32M -m32). That patch isn't ready to go in yet, but we could potentially do something with a try/catch in individual cache backends.
Proposed resolution
TBD
Remaining tasks
Fill out this issue summary.
User interface changes
No UI changes.
API changes
TBD.
Original report by @marvil07, @valthebald, ...
Actually, by a lot of people. Please read the original issue: #1261846: Document 1MB maximum size limit for cache_set()
| Comment | File | Size | Author |
|---|---|---|---|
| #1 | check_string_length.txt | 884 bytes | brianV |
Comments
Comment #1
brianV commented
> Checking the size of an object or array could be really fast or really slow, depending on the PHP internals
There are really only a few ways to check the in-memory size of a variable.
The first way is to use memory_get_usage() to get the current memory usage, create a second copy of your item, call memory_get_usage() again, and check the difference. Note that objects and arrays would need to be re-created key-by-key rather than just doing $temp = $data; since PHP only does a shallow copy (i.e., updates internal references) in that case rather than allocating memory for the second copy.
The other option is to serialize arrays and objects into strings and check the length of the serialized version, which won't be 100% accurate unless we are storing the serialized version anyway (i.e., DatabaseBackend). Running the attached file gives the following execution times over 100 million iterations:
How should a failure be handled?
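The serialize-and-measure strategy from comment #1 could look roughly like this. This is only a sketch: `_cache_item_size()` is a hypothetical helper, not an existing API, and the 1MB threshold is illustrative, since the real limit depends on configuration such as max_allowed_packet.

```php
<?php

// Sketch of the serialize-and-measure approach from comment #1.
// _cache_item_size() is a hypothetical helper name.
function _cache_item_size($data) {
  // Strings can be measured directly; for arrays and objects, the
  // serialized length matches what DatabaseBackend actually stores,
  // so it is the most relevant number for the database cache.
  return strlen(is_string($data) ? $data : serialize($data));
}

// Illustrative threshold only: MySQL's default max_allowed_packet is
// 1MB, but the effective limit is configuration-dependent.
$limit = 1024 * 1024;
$data = array_fill(0, 10000, str_repeat('x', 200));
if (_cache_item_size($data) > $limit) {
  // A backend could log here instead of failing silently, e.g.:
  // watchdog('cache', 'Cache item exceeds the size limit.');
}
```

One trade-off noted above still applies: skipping the write silently turns an error into a per-request cache miss, so any such check should probably log.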
Comment #2
markpavlitski commented
I've posted a patch to the memcache issue which involves serializing the item and splitting it into chunks if it's too big (by detecting a memcache-specific error code).
We could take a similar approach with the core database cache.
Otherwise we could log an error for the site admin, to increase the value of max_allowed_packet.
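The chunking approach described above might be sketched as follows for the database backend. All names here are hypothetical (`_cache_set_chunked()` is not a real API), and the manifest scheme is just one possible design, loosely modeled on the memcache patch.

```php
<?php

// Hypothetical sketch of chunked cache writes: split the serialized
// payload into pieces under the backend's size limit and store each
// under a derived key, plus a small manifest under the original cid.
function _cache_set_chunked($cid, $data, $chunk_size) {
  $serialized = serialize($data);
  $chunks = str_split($serialized, $chunk_size);
  foreach ($chunks as $i => $chunk) {
    // In a real backend this would be something like:
    // cache_set("$cid:chunk:$i", $chunk, 'cache');
  }
  // A manifest lets cache_get() know how many chunks to reassemble:
  // cache_set($cid, array('chunks' => count($chunks)), 'cache');
  return count($chunks);
}
```

A read path would then fetch the manifest, fetch each chunk key, concatenate, and unserialize; a missing chunk would have to be treated as a full cache miss.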
Comment #3
danblack commented
> We could take a similar approach with the core database cache.
I wouldn't worry about the DB; they all seem to have 1GB or larger limits.
Comment #4
markpavlitski commented
According to the docs, the default value for MySQL is 1MB. Some, but not all, Linux distributions ship a default my.cnf that raises this (usually to 4, 8 or 16MB).
https://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sys...
Perhaps there should be an installation instruction for the user to change the limit, or a hook_requirements check somewhere?
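The hook_requirements() idea could look something like the sketch below. This is not an existing core check: "mymodule", the helper name, and the 16MB recommendation are all placeholders for illustration.

```php
<?php

// Hypothetical hook_requirements() check, as floated in comment #4.
// Pure helper so the threshold logic is testable in isolation:
// flag anything under 16MB as worth a warning (placeholder value).
function _mymodule_packet_too_small($bytes) {
  return (int) $bytes < 16 * 1024 * 1024;
}

function mymodule_requirements($phase) {
  $requirements = array();
  if ($phase == 'runtime') {
    // Read the live MySQL setting; other drivers would need their own check.
    $row = db_query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetchAssoc();
    if ($row && _mymodule_packet_too_small($row['Value'])) {
      $requirements['mymodule_max_allowed_packet'] = array(
        'title' => 'MySQL max_allowed_packet',
        'value' => format_size($row['Value']),
        'description' => 'Consider raising max_allowed_packet to avoid failed cache writes.',
        'severity' => REQUIREMENT_WARNING,
      );
    }
  }
  return $requirements;
}
```

An installation-instructions note alongside such a check would cover hosts where the site admin cannot change my.cnf.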
Comment #5
danblack commented
DB covered (partially) in #972528: dblog fails to log MAX_ALLOWED_PACKET errors because they're longer than MAX_ALLOWED_PACKET.
Comment #6
danblack commented
Comment #20
smustgrave commented
Thank you for creating this issue to improve Drupal.
We are working to decide if this task is still relevant to a currently supported version of Drupal. There hasn't been any discussion here for over 8 years, which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.
Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.
Thanks!
Comment #21
smustgrave commented
Since there's been no follow-up, going to close this out, but if it's still valid in D11 we can always re-open.
Thanks all!