Hi all,

This seems to be an issue where many custom fields cause an error when the view's data is written to the cache. I'd appreciate some attention.

One confusing part of this is that turning off caching doesn't fix it. Why should the cache table be touched at all if caching is turned off? Is this a bug?

When the error showed the whole query, it was very large but still probably below a megabyte, and the host has raised the max_allowed_packet setting to 16 MB.

user warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache SET data = 'a:4:{s:6:\"tables\";a:111:{s:29:\"node_data_field_referral_code\";a:5:{s:4:\"name\";s:21:\"content_type_referral\";s:4:\"join\";a:2:{s:4:\"left\";a:2:{s:5:\"table\";s:4:\"node\";s:5:\"field\";s:3:\"vid\";}s:5:\"right\";a:1:{s:5:\"field\";s:3:\"vid\";}}s:6:\"fields\";a:1:{s:25:\"field_referral_code_value\";a:9:{s:4:\"name\";s:41:\"Text: Referral Code (field_referral_code)\";s:10:\"addlfields\";a:0:{}s:8:\"sortable\";b:1;s:13:\"query_handler\";s:33:\"content_views_field_query_handler\";s:7:\"handler\";a:2:{s:33:\"content_views_field_handler_group\";s:21:\"Group multiple values\";s:35:\"content_views_field_handler_ungroup\";s:28:\"Do not group multiple values\&quo in /home/example/www/includes/database.mysql.inc on line 172.

Many thanks, this has got us stumped.

- ben :: http://AgaricDesign.com

Comments

merlinofchaos’s picture

One confusing part of this is that turning off caching doesn't fix it. Why should the cache table be touched at all if caching is turned off? Is this a bug?

That's *page* caching that you can turn off; you can't turn off Drupal's caching in general. Performance would be ludicrously bad.

I can't think of any decent solution to this, except to turn off any modules that might export Views data to reduce the amount of data that's being cached.

mlncn’s picture

Thanks Merlin.

Turning off Drupal and PHP's error reporting hides the problem adequately ;-)

We tried to turn off anything non-essential, but CCK and Views are essential and the two alone trip up these SQL updates and inserts into cache.

I don't suppose there's a Beginner's Guide to Drupal Caching where I could begin to look around for ways to modify what is cached or how it's done?

Pointers in that direction further appreciated.

linuxpimp’s picture

Version: 5.x-1.6 » 5.x-1.x-dev
Category: bug » support
Status: Closed (duplicate) » Active

All

I am having the same issue and am trying to understand something:

Is the output
user warning: Got a packet bigger than 'max_allowed_packe.........

Just a warning, or is there data loss? Can I ignore it? What is going on in the back end?

Thanks in advance for your patience :-)

lp

jpsalter’s picture

Just moved a Drupal site to a new server and this error appeared. Any suggestions?

jpsalter’s picture

Category: support » bug

Follow up --

1) commenting out the trigger_error() function solves the watchdog problem, but does not solve the views problem
2) this is happening in version 4.7.6 too
3) views does not seem to be loading
4) my server is via Pair.com - I don't think it has any unusual configurations

Changing from support -> bug report

nbayaman’s picture

This is not a Views issue. This kind of error appears when your SQL queries exceed the max_allowed_packet size defined in your MySQL configuration file. So if you increase this variable (set-variable = max_allowed_packet=<increased value>), this problem will be solved.
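
For reference, a minimal sketch of the two usual ways to raise the limit on a server you control (the 16M value is just an example):

# In my.cnf, under the [mysqld] section; takes effect after a server restart:
[mysqld]
max_allowed_packet = 16M

-- Or at runtime from the MySQL prompt (requires the SUPER privilege
-- and reverts when the server restarts):
SET GLOBAL max_allowed_packet = 16777216;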

jpsalter’s picture

Follow up -

Thanks nurlan.bayaman,

I can confirm that this was the problem. I had moved my Drupal site to a new webserver. The difference between the two servers was max_allowed_packet = 1M vs. 5M.

This can be a serious problem, since many shared hosting environments use the default MySQL setting of max_allowed_packet = 1M and do not allow changes to the shared environment.

Additionally, the original posting indicated they had bumped the value up to 16 MB. It seems like this should have done the trick.

merlinofchaos’s picture

While I understand this is a problem, fixing this is a major task. If Views were a car, fixing this would require pulling out the engine, taking it apart, replacing a key component, putting it back together and putting it back in. Not easy to do.

It's on my list but unfortunately can't be fixed for a while.

moshe weitzman’s picture

Earl - I guess fixing this would involve moving away from a serialized array in the cache table and into dedicated table(s) that are queried on demand?

I have enough CCK fields on Observer that this was happening on my dev server.

merlinofchaos’s picture

Yea. I've had some discussion on this topic via IRC, about how this could be implemented. This would be a MAJOR change to Views core, so it's not something that is easy or quick or will happen soon.

webchick’s picture

Subscribing. We fixed this by starting mysql with --max_allowed_packet=32M, fwiw. Shared hosts of course don't have this option, though.

svogel’s picture

Subscribing, too.
I have this issue when using CCK and the Location module. Somehow the Views module tries to add ALL the locations (around 8000) as an option form element to the cache.
It wouldn't make any sense to put that into the cache table, no matter how large I set the MySQL packet size.
Any progress made with this bug?

bonobo’s picture

Seeing similar issues as well -- we are on a site using cck, views, and the location module.

We addressed this using webchick's suggestion of bumping the cache to 32 MB --

Recognizing that this is a stopgap solution, I have one main question:

What governs how the packet size grows?

bonobo’s picture

sorry -- when I said cache, I meant max packet size --

marcp’s picture

As an interim solution, how about using gzcompress to compress the data before sending it to the database, and doing the corresponding gzuncompress on the way out?

There are 6 calls to cache_set() and 6 to cache_get() in views.

We can turn the 6 cache_set calls into 6 calls to our new views_cache_set() function, where we can do the serialization as well:

function views_cache_set($cid, $object) {
  // Serialize the object up front so the string can optionally be
  // compressed before it is stored.
  $data = serialize($object);
  if (views_use_compression()) {
    $data = gzcompress($data);
  }
  cache_set($cid, 'cache_views', $data);
}

and the 6 cache_get calls into 6 calls to our new views_cache_get() function, which can also be held responsible for unserializing the object:

function views_cache_get($cid) {
  // cache_get() returns a cache object (with the payload in $cache->data),
  // or 0 on a miss.
  $cache = cache_get($cid, 'cache_views');
  if ($cache) {
    $data = $cache->data;
    if (views_use_compression()) {
      $data = gzuncompress($data);
    }
    return unserialize($data);
  }
  return 0;
}

This at least abstracts out all the cache_set and cache_get calls. Ideally it would be pluggable or hookable so we could cache to disk or wherever instead of to the db... A little extra code in there would allow us to dynamically split the strings into multiple rows if we bump up against database field size limitations, etc...

Here's the configurability part of it:

function views_use_compression() {
  return variable_get('views_use_compression', 0);
}

What's missing? The compression doesn't solve the problem, but it seems like a start...

?,

Marc

merlinofchaos’s picture

Actually, the compression may solve the problem; because of the type of data, we should see 90%+ efficiency. There's LOTS of repeated data in serialized arrays.

This approach seems like a nice solution that I had not considered. Can you work up a patch?

It needs to be smart enough to tell if it's seeing compressed data, too, or the actual update will really suck for people. A simple token at the front of the data should work.

One thing I'm not sure of is if the cache_* functions can handle the compressed data. Though we do have page compression so I bet it can. Also be sure to check to see if the function is available, as I've found that not all hosts have gzip extensions compiled in.
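
A minimal sketch of the token idea, combined with the availability check (the 'gz:' prefix and the exact layout here are just illustration, not committed code):

function views_cache_set($cid, $object) {
  $data = serialize($object);
  // Compress only when zlib is available on this host.
  if (function_exists('gzcompress')) {
    // A short token up front marks compressed entries, so plain
    // serialized rows left over from an older Views version still read fine.
    $data = 'gz:' . gzcompress($data);
  }
  cache_set($cid, 'cache_views', $data);
}

On the way out, views_cache_get() would test for the token before inflating:

if (substr($data, 0, 3) == 'gz:') {
  $data = gzuncompress(substr($data, 3));
}

Serialized PHP data always starts with a type character ('a:', 's:', 'O:', ...), so it can never be mistaken for the token.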

marcp’s picture

Good point about checking for the existence of the compression routine. I'll hunt around to see how page compression is done.

Can we do this without a token if:

  1. We default to not compressing the data
  2. We clear the views cache upon saving the settings

?

merlinofchaos’s picture

My worry about not using the token is that someone upgrades to the next version of Views but doesn't immediately run update.php -- their Views cache will appear corrupted, which will make their views appear broken. If they then save a few, because of the order of things, it could trash their views.

Maybe I'm overthinking that, but that's my worry.

marcp’s picture

I understand your concern.

I was thinking of forcing the user to turn on compression via the views settings page (which doesn't exist, although admin/build/views/tools, which is provided by views_ui, could be a good fit also). The default would be "no compression," so anyone upgrading would still be forced to visit the settings page to turn on compression. When they turn compression on via the settings page, we clear the cache, so there's really no chance of them attempting to uncompress something that hasn't been compressed. Unless, of course, they manually change the setting in the variables table.

Also - if there is to be a settings page, which I think there needs to be, where should it go?

bonobo’s picture

I think this could be handled via a few complementary steps:

1. documentation on the new functionality with the new release -- highlight the ability to turn compression on or off via the ui -- I'd be glad to write this as the functionality develops.

2. as Marc suggests, the default is compression turned off. Turning the compression on would clear the cache, which would eliminate the possibility of a corrupted view

moshe weitzman’s picture

I don't see a benefit to providing a UI for this. If the gzip extension is present, use it... As for the upgrade case, Views should always dump its cache whenever update.php is run. If it doesn't already, I consider that a bug.

I know that the page cache recently got a UI, but that's different - the problem there was a bad interaction between Apache and php.ini gzip - we don't have that interaction here.

marcp’s picture

Earl's worried about the case where the user upgrades and doesn't run update.php, which I've done many times myself...

If we default to using compression and we don't provide a UI then we'll need a token [in each cached row of cache_views], because the data's not going to be compressed and there'll be no way to automatically clear the cache before expecting compressed data.

This is going to be a pretty simple and hopefully risk-free patch, once we decide on what it's supposed to do.

Here are the open questions in my mind:

1. Use compression always if gzip is present?
2. Token or no token?
3. Configuration option to turn on/off compression?
4. If configuration option, where does it go and who provides it? (ie. admin/build/views/settings provided by views.module)
5. Default to compressed

The most important things to me are that:

1. Nothing breaks if the user upgrades and doesn't run update.php
2. Users get relief from the max_allowed_packet errors
3. The data in cache_views is clean
4. Views cache code gets abstracted out

Marc

bonobo’s picture

The UI adds some layers of protection between user error and negative consequences to the site -- while the obvious preference would be for no user error, I'd say we are a ways off from that ever being something we can rely on :)

So, adding in a UI, and having the default be no compression, would protect against users who forget to run update.php --

In looking at Marc's points above:

Here are the open questions in my mind:

1. Use compression always if gzip is present? -- no -- turn it on via the UI to protect users who don't run update.php
2. Token or no token? -- no token needed if compression is off by default, and if the use of compression is governed via a UI
3. Configuration option to turn on/off compression? -- yes
4. If configuration option, where does it go and who provides it? (ie. admin/build/views/settings provided by views.module) -- still open
5. Default to compressed -- no, for reasons above

The most important things to me are that:

1. Nothing breaks if the user upgrades and doesn't run update.php -- addressed via a UI
2. Users get relief from the max_allowed_packet errors -- addressed via compression
3. The data in cache_views is clean -- addressed via a UI, and cleaning out the cache when compression is turned on
4. Views cache code gets abstracted out -- beyond my level of knowledge to speak intelligently on this one --

merlinofchaos’s picture

I think the token is easier and less confusing; most users won't have any idea what cache compression even means.

There's a 'tools' tab on the main Views UI that has a clear cache button. Should we do a UI, that's where I'd put it. (That's where I also intended to put a 'disable caching' checkbox, but never got around to it either. Though this patch could at least provide the hooks for something like that.)

marcp’s picture

If we're going to go with the token, then I can see how it makes sense to:

  1. Check for zlib, and if it exists, then compress/uncompress
  2. Forget about the configurability of it right now - that's probably another patch which would incorporate the cache/no-cache option

If you guys buy off on this, I'll code it up.

One more thing - are you okay with these function names & parameters?

function views_cache_set($cid, $object);
function views_cache_get($cid);

Marc

merlinofchaos’s picture

That sounds good to me.

dvessel’s picture

subscribing

svogel’s picture

As I see it, the size of the structure put into the cache depends, among other things, on the number of terms in the vocabularies. Currently I have a rather large vocabulary (> 8000 terms).
What sense does it make to put that into the cache? I tried with a packet size of 48 MB and it still doesn't fit! Putting 48 MB into a cache table isn't reasonable.

I suspect the time spent putting that into the cache and retrieving it again is more than the time to fetch the information from the right place when it's actually needed(!).
So maybe instead of compressing the info to fit into the cache (which adds even more performance overhead to the caching), Views should check whether what's being put into the cache is too big, AND only retrieve the information that's really needed!

I solved my problem by simply commenting out all the cache_set calls. Well, this doesn't actually solve the problem, but I simply don't get those "Got a packet bigger than 'max_allowed_packet'" warnings any more, and that's fine with me. Everything else works like a charm, and I don't even notice a performance hit.

Not everything must be put into the cache tables.
And there might be cases (more terms, more CCK fields) where the information won't fit into the cache even compressed (see above).

Best regards
Stefan

mlncn’s picture

Pointing out a discussion on CCK and indexes for cross-fertilization. Could this help?

Best practices for managing indexes (particularly on cck tables)
http://groups.drupal.org/node/7614

Should we find a way to tell views in certain cases to use better indexes and skip caching?

merlinofchaos’s picture

Note that it doesn't make sense to put the whole taxonomy tree in the cache. It was a side effect of a contributed patch that I didn't notice when I vetted the patch. In that sense, it's my fault for the poor review process. And that's not the only bug this patch caused.

bonobo’s picture

Two questions on this:

We have been (VERY SLOWLY) working on the patch/process outlined by marcp above --

1. Given the feedback in #30, is this patch necessary for the current version of views?

2. Will this patch be necessary in Views2?

If there's still a need/interest in this, we can probably get this done within the next couple weeks. Is this still needed, or should we shelve this?

catch’s picture

subscribing.

jgraham’s picture

FileSize
4.04 KB

Attached is a patch as proposed by marcp using gzip when appropriate.

Let me know if this needs to be reworked at all.

Code was developed from the DRUPAL-5 branch in CVS.

marcp’s picture

Status: Active » Needs review

Changed status to patch (code needs review)

Owen Barton’s picture

Subscribing

emilyf’s picture

Status: Needs review » Needs work

This patch applies successfully but then gives a parse error for the views module.

Parse error: syntax error, unexpected '}' in /sites/all/modules/views/views.module on line 2147

Removing the extra } on line 2147 results in
Parse error: syntax error, unexpected $end in sites/all/modules/views/views.module on line 2203

Adding in the missing } on the final line results in:

Parse error: syntax error, unexpected T_SL, expecting ')' in sites/all/modules/views/views_cache.inc on line 280
Wasn't sure how to fix that one.

jimbop’s picture

subscribing

blackdog’s picture

Version: 5.x-1.x-dev » 5.x-1.6
Status: Needs work » Needs review
FileSize
6.63 KB

Here's an updated patch that seems to work.

Any ideas what the compression does to the CPU load?

TC44’s picture

subscribing

grah’s picture

subscribe

decafdennis’s picture

subscribing

Patch appears to solve the problem.

decafdennis’s picture

Ignore that, patch does not work. Views appear to stop working after being cached.

nath’s picture

Wouldn't a patch stopping the caching of the taxonomy tree make more sense?

merlinofchaos’s picture

nath: Yes. Absolutely. I've been hoping.

doc2@drupalfr.org’s picture

What about CCK data too? I have about 1000 terms and thousands of CCK values...

Thanks for the contributions to this topic. Very interesting indeed. A question remains: if, as Merlin says in #16, "the compression may solve the problem; because of the type of data, we should see 90%+ efficiency. There's LOTS of repeated data in serialized arrays," then maybe compression is an interesting feature for what's left in the tables. Don't you think so?

Yet the problem remains. Is any patch under development?

Greetings, Arsène

doc2@drupalfr.org’s picture

Wow, this is getting critical for me...

Indeed, I already reported the bug elsewhere: #235891: 5.7 bug: "Warning: Got a packet bigger than 'max_allowed_packet' bytes query: INSERT INTO watchdog"

My problem is, while waiting to be able to increase my memory limit, to downgrade some features and content types in order to keep the others running.

Thus, any solution (compression or deletion) is welcome to go on running our sites!

EDIT: The following does not seem safe!
Commenting out (see comment #6) would be enough as a temporary workaround... To do so, comment out line 172 in drupal/includes/database.mysql.inc.
WARNING:
- I've already encountered the white page issue by ignoring this error (before commenting out), so you may face it as well.
- After commenting out, I sometimes get parts of my pages' layout messed up.

-> Does this trick stop inserting data into tables, or does it just stop reporting errors?

Please let us know out there!

joshua_cohen’s picture

Bookmarking - I too am seeing this issue when using multiple CCK fields and a lot of taxonomy terms.

Josh

bjacob’s picture

subscribing

marcp’s picture

If you are just subscribing to this issue, please try the patch in #38. I just tried it out and the 4 or 5 views I tested seem to be working fine for me, but I haven't done a rigorous test, and I also wasn't encountering the max_allowed_packet error before applying the patch.

@naquah in #42 -- what exactly is wrong with your views once they are cached with compression? Can you narrow it down to a very simple view that gets broken?

Again -- please let's have a few others test out the most recent patch.

blackdog’s picture

In my updated patch in #38, all I did was fix the errors that were thrown by the previous patch.
I'm using this patch on a live site without trouble, at least none that I can relate to this patch.

naquah (#42): are you sure that the views stopped working, and is it clear that it was caused by this patch?

doc2@drupalfr.org’s picture

But why store unneeded data at all? And why use more CPU compressing it?

Or maybe you can prove the utility of this data in the views cache... svogel in #28 says that it's really not necessary on his big site...

So, can't there be a patch to stop this data from going into the cache? Or, if this data proves more useful when cached, to check its size first (and possibly compress it after)?

EDIT: Because the #6 hack led me to some display problems, as feared in #46 (part of my content wouldn't show, but with no error message), and is therefore dangerous to keep running, it may be better to use svogel's suggestion from #28.
> Where to comment out these cache_set calls?

For Views 1.6, I found:
- in views_cache.inc : lines 331, 284, 246, 146, 57
- in views.module : (doesn't seem necessary to comment, see #54) line 203

Can anyone (svogel) confirm this?

I'll let you know here if I ever come across trouble with this hack.

Thanks in advance, Arsène

bcn’s picture

subscribing...

svogel’s picture

Hi Arsène (#52),

I commented out the same lines in views_cache.inc.
But I forgot the one at views.module line 203... it seems that call didn't bother me.
I'm still of the opinion that the full taxonomy tree shouldn't be put into the cache. What sense does that make, especially for large taxonomies?
For me, commenting out the cache_set calls worked fine.

Best regards
Stefan

blackdog’s picture

Yeah, it seems to work well when all cache_sets are commented out, but now I have roughly 200-300 more SQL queries on just about every page, which is less good. Of course, it depends on the site and on how many and how complex the views in use are.

marcp’s picture

It seems that a lot of folks are interested in a fix for this issue, but we're not getting any input from Earl, so here are some questions for him that might help shape this discussion:

1. Will you consider committing a patch that does compression without addressing the taxonomy issue?
2. Will you consider committing a patch that only addresses the taxonomy issue?
3. In order to address the taxonomy issue -- which patch was it that you committed (this may help someone else propose a new solution)?

Thanks from all of us for all that you've done for the community!

Marc

dharamgollapudi’s picture

Subscribing....

doc2@drupalfr.org’s picture

To sharpen Marc's questions in comment #56: just remember that this is not only about taxonomy data but about CCK content as well (cf. the title).

billmurphy’s picture

subscribing too..

merlinofchaos’s picture

I'm sorry, I didn't realize I wasn't clear.

I am not convinced the compression patch is the way to go; it's a performance drain that would need to be tested to see if caching is even worth it at that point.

I would absolutely accept a patch that fixes it so that taxonomy is not stored.

Views can't do much at all about CCK, so even though this issue might be about how much data CCK stores, I have zero control over that.

Summit’s picture

After updating pathauto to 2.2 on Drupal 5.7 (from pathauto 1.2) I got this user warning:

Got a packet bigger than 'max_allowed_packet' etc..

Subscribing, greetings, Martijn

doc2@drupalfr.org’s picture

I'm not sure this is specifically related to pathauto. The max_allowed_packet error is quite capricious in the way it shows up. For me, the error appeared with the book module and disappeared (temporarily) after disabling it. Until I realized it was related to THIS Views issue.

But maybe pathauto does store more info in cache in its 2.2 version than in its 1.2... which would have triggered the error. I suggest you read the whole issue. Good luck now!

Summit’s picture

Hi Blackdog,

Didn't you miss one:
Line 275:

cache_set('views_query:' . $view->name, 'cache_views', serialize($info));

to

views_cache_set('views_query:' . $view->name, $info);

Can somebody else confirm this is working?

greetings,
Martijn

jgoldfeder’s picture

subscribing

esllou’s picture

subscribing. I have a 3700-term taxonomy on a shared server with a 4 MB max_allowed_packet, and I'm getting either the white page of death or the enormous error message every time I load a page, so I've had to delete all my terms.

merlin: can't you roll back the patch which introduced this problem? Or I suppose it's too ingrained in many successive changes by now?

nath’s picture

AFAIK this wasn't a separate patch introducing taxonomy caching but a side effect of a patch doing something different.

sun’s picture

subscribing

Will Kirchheimer’s picture

Feedback on the patch in comment #38

Using it + issues:

I installed the patch and cleared the cache using: cache_clear_all();

Ran into some error messages that I am guessing were ACL / Content Access related when viewing pages that have been built with views.

Rebuilt the permission tables (turned off the ACL and Content Access modules, hit rebuild permissions)

No luck; I gave up for the day, came back the next day, and everything worked. I am guessing cron did something?

Day 2 of use in the dev site, everything seems to be nice and snappy, but I don't have a way to gauge resource use... seems great

-
Scenario for why I use this change to the module

I am trying to move my site to Media Temple DV, which has a managed virtual server option where they maintain the OS updates. Not having to maintain a server just to have a website would be great for my client; however, MT refuses to raise the max packet size from 1 MB unless they switch you to a root access account, disabling the automatic server updates in the process.

This patch lets me slip past the issue... of course, so would MT raising the MySQL max_allowed_packet to 16 MB like they have it on the GS.

Will Kirchheimer’s picture

On further testing, I seem to get the errors back if I clear the cache. I'll try to track this down more and provide an update when I have more info:

'INNER JOIN node_access na ON na.nid = node.nid WHERE (na.grant_view >= 1 AND ((n' at line 1 query: INNER JOIN node_access na ON na.nid = node.nid WHERE (na.grant_view >= 1 AND ((na.gid = 0 AND na.realm = 'all') OR (na.gid = 1 AND na.realm = 'content_access_rid')))

webchick’s picture

That's likely unrelated; see http://drupal.org/node/217015.

Will Kirchheimer’s picture

True, except it only occurs while the patch is in place.

I'm also seeing some other odd behavior, like the SQL errors below while browsing non-station.module views; this also stopped upon removal of the patch (notice the odd period behavior):

SELECT DISTINCT (
node.nid
), timestamp, station_playlist.timestamp AS station_playlist_timestamp, node_data_field_playlist_time_start.field_playlist_time_start_value AS node_data_field_playlist_time_start_field_playlist_time_start_value, node_data_field_playlist_time_end.field_playlist_time_end_value AS node_data_field_playlist_time_end_field_playlist_time_end_value, node_data_field_station_ref_show_hosts.field_station_ref_show_hosts_nid AS node_data_field_station_ref_show_hosts_field_station_ref_show_hosts_nid
FROM node node
WHERE (
.type
IN (
'station_playlist'
)
)
AND (
.status = '1'
)
AND (
.timestamp <= 'now'
)
AND (
.field_playlist_list_on_site_yn_value_default
IN (
'1'
)
)
ORDER BY DESC
LIMIT 0 , 30

Anyway, I have a server solution worked out and am facing a deadline, so I am dropping out of this (patch uninstalled).

Tamar Badichi-Levy’s picture

I have commented out all the cache_set lines in the views_cache.inc file. This solved the problem for now, but is there a better "Drupalic" solution? Why shouldn't there be an option to disable Views caching on a Views settings page?

phdhiren’s picture

subscribing

cybershan’s picture

subscribing

cybershan’s picture

Hi everybody, what problems will appear if I comment out all the cache_set lines in the views_cache.inc file?

I have to solve this problem ASAP.

thanks in advance,

merlinofchaos’s picture

Reduced performance only. No functional problems should arise.

marcp’s picture

So how about a patch that simply wraps the cache_set() calls in a variable_get('allow_views_caching', true) check?

Seems like that could get committed without causing too many headaches, and then people who are having problems could manually enter the 'allow_views_caching' = false row in their variable table. Someone could then write a small contrib module providing a UI that lets users turn Views caching on or off. The impact on Views would be minimal.

Seems like the compression option is out, and the taxonomy patch will probably never arrive, yet this issue is still causing people headaches. Folks with the resources just jack up the packet size, but that leaves the lesser-resourced folks grasping for solutions.
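
A minimal sketch of that wrapper (the 'allow_views_caching' variable name is just the suggestion above, nothing that exists in Views):

function views_cache_set($cid, $object) {
  // Site-wide kill switch for Views caching; defaults to caching on.
  if (!variable_get('allow_views_caching', TRUE)) {
    return;
  }
  cache_set($cid, 'cache_views', serialize($object));
}

Sites hitting the packet limit would then simply rebuild the Views data on every request instead of reading it from the cache, trading extra queries for the oversized rows.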

cybershan’s picture

that's good news to me. :)

minesota’s picture

Any further solution?
Any final recommendation, so far, to avoid this?

jhedstrom’s picture

The patch from #38 worked for me. Thanks for this.

edit: It was the patch from #38 that I used, not #33.

jhedstrom’s picture

I misspoke in #80. We started noticing odd behavior (views not returning any results, etc.), similar to that described above. For now, since this was critical for a deployment to a shared host, we commented out all instances of cache_set in views.module and views_cache.inc.

mariagwyn’s picture

I appear to be having this problem on Views 6.x-2.0rc1. I am not sure it is the same problem, but it started when I clicked twice in the Views UI, screwing up its process. It also triggered a JavaScript error with AJAX, solved by turning off JavaScript in Views. I can't increase the MySQL packet size (currently 16M; I checked with my host). I don't see any errors in the PHP log that tell me what initiates this. I have turned off EVERY module except core, even Views, and still get the error; I emptied the cache, the Views cache, everything I can think of. I even went back to an unadulterated Garland theme. I used Devel to uninstall Views and imported two of the views I use (not the one on which all the errors started). This is the error I get:

Warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache SET data = 'a:390:{s:13:\"theme_default\";s:8:\"newskete\";s:13:\"filter_html_1\";i:1;s:18:\"node_options_forum\";a:1:{i:0;s:6:\"status\";}s:18:\"drupal_private_key\";s:64:\"92435af7f38cafa840efc37059977b734a52797e36a78463c66a842857a3eb12\";s:10:\"menu_masks\";a:28:{i:0;i:127;i:1;i:125;i:2;i:63;i:3;i:62;i:4;i:61;i:5;i:59;i:6;i:58;i:7;i:56;i:8;i:45;i:9;i:44;i:10;i:31;i:11;i:30;i:12;i:29;i:13;i:24;i:14;i:22;i:15;i:21;i:16;i:15;i:17;i:14;i:18;i:12;i:19;i:11;i:20;i:10;i:21;i:7;i:22;i:6;i:23;i:5;i:24;i:4;i:25;i:3;i:26;i:2;i:27;i:1;}s:12:\"install_task\";s:4:\"done\";s:13:\"menu_expanded\";a:0:{}s:9:\"site_name\";s:9:\"New Skete\";s:9:\"site_mail\";s:22:\"email@gmail.com\";s:21:\"date_default_timezone\";i:-14400;s:23:\"user_email_verification\";b:1;s:9:\&q in /home/mgwynm/public_html/dev/drupal6/includes/database.mysqli.inc on line 128

I did not apply the patch in #38 as it is for the 5.x version, but any help on this would be appreciated. This is a major problem (I can't even get the errors to stop writing to the top of my page; I've turned off statistics, logging, etc.).

Thanks,
Maria

merlinofchaos’s picture

For Views 2, see the issue entitled "Views requires a lot of memory"; this is a totally unrelated error.

cycas’s picture

I encountered this problem, and it took down the entire Drupal 5 site I was working on. I commented out trigger_error() in database.mysql.inc so that I could get things back together enough to log in and make changes.

I was getting both the 'packet too big' error and an out-of-memory error, so I also turned up the available memory in sites/default/settings.php, which got rid of the memory error.

Commenting out the trigger_error() got the site back, but I was concerned that turning off the errors might be hiding something; also, I still had a mystery message appearing on every page sitewide, saying:

"Build observation view"

So I:

- updated to the latest version of the views module,
- updated the database
- hopefully cleared the views cache (well, you never know!)
- commented out all instances of cache_set in views_cache.inc and views.module
- ran cron.php for luck
- turned trigger_error() back on.

The site is on shared hosting and we cannot increase the packet size.

This has got the site back up. I'm editing this because I've just realised that the 'build observation view' message is probably specific to a custom module within our site and so should not really be in this bug report, but I'm leaving the rest in the hope it may be helpful to someone else.

merlinofchaos’s picture

cycas: That phrase "Build observation view" does not appear within Views. Your best bet is to grep the code to see where that phrase appears and whether it comes from a module or some custom code. A hint for tracking this down: figure out what the word 'observation' means in your system; that might point you directly to the module and/or custom code that's spitting this out. It looks like a debug message that should have been removed.

twohills’s picture

I bumped the MySQL max_allowed_packet up to 50M rather than apply the patch from #38.
It worked, but I look forward to a compression option for cache_views pleeeeese :-D

sun’s picture

Everyone having this issue, please test the patch in #218187: Views cache too large. Thanks.

babbage’s picture

I am in the process of moving a client site from one hosting provider to another, and I ran into the packet-bigger-than-max_allowed_packet error when trying to import the MySQL database export from the previous installation. The problem was that max_allowed_packet was set to only 1 MB on the new server—the default.

I tried to get the hosting provider to modify this default, but have not heard back from their server admins yet.

In the meantime, I found a solution. From the original install (and after a full backup of the MySQL databases, of course), I selected the following tables in the Drupal database: cache, the nine other tables that begin with cache_ (cache_block through cache_workflow_ng... your mileage will vary depending on modules), and the watchdog table. (The latter had the largest number of rows of any table—this is a recent development install—and an inspection of the contents indicated it was logging repeated minor errors such as a single missing image...) For those not in the know, I cleared the contents of those tables in phpMyAdmin by putting a tick next to them and then selecting "With selected: Empty" at the bottom of the list. Confirming this emptied the tables. I checked the site still worked fine—yep—and exported the database again. That imported with no problem, and we are now up and running on the new server.

So, a pain, but actually really easy to do, and it may serve as a workaround in the meantime. :)
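
For anyone doing the same from a MySQL prompt instead of phpMyAdmin, the equivalent is a TRUNCATE per table (back up first; your list of cache_* tables will differ depending on installed modules):

TRUNCATE TABLE cache;
TRUNCATE TABLE cache_block;
TRUNCATE TABLE cache_filter;
TRUNCATE TABLE cache_views;
TRUNCATE TABLE watchdog;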

sun’s picture

As mentioned in #87, #218187: Views cache too large contains a patch that possibly circumvents this error. Still waiting for reviews.

thebenedict’s picture

The patch from #218187: Views cache too large seems to be working great for me -- this thread saved me lots of grief! I was getting the max_allowed_packet error and I suspect it was caused by a content type with 20+ cck fields, two of which are large select lists. I'm using cck 5.x-1.9 and views 5.x-1.6.

bcn’s picture

To reinforce thebenedict's last comment, I can also report that my max_allowed_packet errors seem to have been resolved after applying the patch from #218187: Views cache too large.

sun’s picture

Status: Needs review » Closed (duplicate)

Marking as duplicate of #218187: Views cache too large

techczech’s picture

subscribing

pedrochristopher’s picture

The patch from #38 has worked for me so far. The problem came up when I had 12,500 organic groups. The patch at http://drupal.org/node/218187 did not work for me.

k3vin’s picture

subscribing

ronliskey’s picture

subscribing

ramones79’s picture

To merlinofchaos:

Some people get a "MySQL server has gone away" error after enabling the Views module, so they believe that the module itself is causing the problem.

After spending a lot of time trying to figure it out, it turns out that for some users the problem may not be directly related to the Views module but to the wait_timeout setting in MySQL. On many servers this value is set very low, so as a user adds more modules to a Drupal installation, the Update status module takes correspondingly longer to check for updates.
More information here: http://drupal.org/node/490508#comment-1700580

I personally encourage you to continue your efforts in further optimizing the Views module, but please keep that in mind too.

Best regards and thanks for the great module, I love Views :)

dpatte’s picture

Version: 5.x-1.x-dev » 5.x-1.6
Category: support » bug
Status: Active » Closed (duplicate)

I know it's an old thread, but I am seeing this error occasionally, especially when trying to run cron.
I am using
Drupal 6.2.2
Views 6.2.16

I only have one taxonomy term, but I have many content types with a lot of CCK fields, including Views references.

Any ideas?