I'm currently upgrading a D5 site to D7 and ran into an error running taxonomy_update_7005(). I found several threads about this suggesting looking for nodes with negative dates or increasing max_execution_time, neither of which helped. I added some debugging and found that it would fail at a different node each time, so I concluded it was a timeout or memory issue.

After several hours of changing settings and applying patches, such as the one from #1549390: taxonomy_update_7005() can be faster, the only way I was able to get this to run was to set $batch_size to 100 instead of 1000. It ran painfully slowly, and I gradually increased it to 350 before it finally succeeded.

The exact error I got was

The update process was aborted prematurely while running update #7005 in taxonomy.module. All errors have been logged. You may need to check the watchdog database table manually.

There was no useful information in the watchdog table. I had increased max_execution_time from 60 to 300, then 600, then 2400, all to no avail. My PHP memory limit was 128M.
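For reference, the combinations I tried correspond to php.ini values along these lines (the exact file location depends on the host):

```ini
; php.ini - values cycled through while testing
max_execution_time = 2400   ; tried 60, 300, 600, 2400 - none helped
memory_limit = 128M
```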

Marking as major as the upgrade path was broken for this site. I can PM someone a database dump if they want to take a look and see if it fails on their machine.

Obviously reducing the limit arbitrarily is not ideal, but I'm wondering if someone could shed some light on what it is that might be failing with a batch size of 1000.
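For context on where $batch_size comes in: D7 update hooks are batched through a $sandbox parameter, so each HTTP request only has to complete one pass. A minimal sketch of that pattern, assuming a hypothetical update function (the $sandbox keys are the real Batch API convention; the body is illustrative, not the actual taxonomy code):

```php
<?php
/**
 * Sketch of the D7 batched-update pattern that taxonomy_update_7005()
 * follows. Everything except the $sandbox keys is illustrative.
 */
function example_update_7005(&$sandbox) {
  // Hypothetical per-pass limit, analogous to $batch_size in the patch.
  $batch_size = 1000;

  if (!isset($sandbox['progress'])) {
    $sandbox['progress'] = 0;
    $sandbox['max'] = db_query('SELECT COUNT(nid) FROM {node}')->fetchField();
  }

  // Process up to $batch_size rows in this request. If a single pass
  // over 1000 rows exceeds the web server's timeout, the whole update
  // aborts, even though smaller batches would have succeeded.
  // ... load and migrate $batch_size rows here ...
  $sandbox['progress'] += $batch_size;

  // Batch API keeps re-invoking this function until #finished reaches 1.
  $sandbox['#finished'] = min(1, $sandbox['progress'] / max(1, $sandbox['max']));
}
```

So lowering $batch_size shortens each individual request rather than the total work, which is why it can sidestep a per-request timeout.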

Attachment (comment #3): update_php_network_tab.png, 95.79 KB, uploaded by mstrelan

Comments

catch’s picture

Component: database update system » taxonomy.module
Status: Active » Postponed (maintainer needs more info)
Issue tags: +D7 upgrade path

It's possible you were hitting Apache's Timeout, which can fire before PHP's max_execution_time kicks in. Some shared hosts set it very low (see the discussion on #686196: Meta issue: Install failures on various hosting environments, where the Drupal 7 install was failing, for example).
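If that's the culprit, it's a one-line check in the Apache config (the file name varies by distribution; 300 seconds is Apache's shipped default, but hosts often lower it):

```apache
# httpd.conf - how long Apache will wait on a request before giving up.
# If this fires first, raising PHP's max_execution_time changes nothing.
Timeout 300
```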

Can you check my.cnf and paste the line for innodb_flush_log_at_trx_commit? If it's set to 1, that will make everything run very slowly when doing a lot of inserts, quite possibly enough that setting it to 0 or 2 would allow your update to complete.

Fixing tags a bit and marking needs more info. We could reduce the batch size, but I'd like to rule out at least the two issues mentioned above before doing that.

mstrelan’s picture

mysql> SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| innodb_flush_log_at_trx_commit | 1     |
+--------------------------------+-------+
1 row in set (0.00 sec)

I just found #817398: Provide a requirements warning when innodb_flush_log_at_trx_commit is set. I had no idea this could be so bad. I'll see if I can get some time to re-run the upgrade and test this out. Thanks!

mstrelan’s picture

Status: Postponed (maintainer needs more info) » Active
Attached: update_php_network_tab.png (95.79 KB)

Wow, setting innodb_flush_log_at_trx_commit to 0 fixed this issue. I've attached the original network trace showing it failing at exactly 1 minute every time. I think we need to do something to prevent the timeout, e.g. reduce the batch size if it's taking too long, but it should also be documented somewhere obvious that this MySQL setting needs changing.
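For anyone else landing here, the change is a single line in my.cnf, followed by a mysqld restart. (A sketch; note that 2 flushes the log to the OS roughly once a second instead of at every commit, which is generally considered a safer compromise than 0, where a mysqld crash can also lose the last second of transactions.)

```ini
[mysqld]
# 1 (the default) = flush the InnoDB log to disk at every commit:
# fully ACID-safe, but very slow for insert-heavy batch updates.
# 0 or 2 = flush roughly once per second instead.
innodb_flush_log_at_trx_commit = 2
```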

marcingy’s picture

Status: Active » Closed (fixed)

This sounds fixed.

mstrelan’s picture

How is it fixed? It's fixed for me, but surely many other site builders are going to come across this, since, you know, it's the default setting.