Hi,
We are using the Tome module on a very large website. When we export the static version, we quickly reach a point where the database stops responding and the whole static export fails:
SQLSTATE[HY000]: General error: 2006 MySQL server has gone away: CREATE TABLE {cache_config} (
`cid` VARCHAR(255) BINARY CHARACTER SET ascii COLLATE ascii_general_ci NOT NULL DEFAULT '' COMMENT 'Primary Key: Unique cache ID.',
`data` LONGBLOB NULL DEFAULT NULL COMMENT 'A collection of data to cache.',
`expire` INT NOT NULL DEFAULT 0 COMMENT 'A Unix timestamp indicating when the cache entry should expire, or -1 for never.',
`created` DECIMAL(14, 3) NOT NULL DEFAULT 0 COMMENT 'A timestamp with millisecond precision indicating when the cache entry was created.',
`serialized` SMALLINT NOT NULL DEFAULT 0 COMMENT 'A flag to indicate whether content is serialized (1) or not (0).',
`tags` LONGTEXT NULL DEFAULT NULL COMMENT 'Space-separated list of cache tags for this entry.',
`checksum` VARCHAR(255) CHARACTER SET ascii COLLATE ascii_general_ci NOT NULL COMMENT 'The tag invalidation checksum when this entry was saved.',
PRIMARY KEY (`cid`),
INDEX `expire` (`expire`),
INDEX `created` (`created`)
) ENGINE = InnoDB DEFAULT CHARACTER SET utf8mb4 COMMENT 'Storage for the cache API.'; Array
(
)
Have you already faced similar issues when exporting more than 500 pages at a time? We are targeting more than 20,000 pages, and the whole rebuild might be needed when updating the menu, for example.
We would be happy to help fix these performance issues.
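For context, Tome's static export is run via Drush, and the maintainer's advice later in this thread centers on its concurrency options. A hypothetical invocation (the option values here are illustrative only, not recommendations):

```shell
# Hypothetical tuning of Tome's static export. A lower --process-count and a
# higher --path-count reduce the number of concurrent export processes, and
# therefore the number of concurrent MySQL connections. The values 2 and 10
# are placeholders to illustrate the shape of the command.
drush tome:static --process-count=2 --path-count=10
```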
Comment | File | Size | Author
---|---|---|---
#13 | interdiff-3020504-9-13.txt | 8.1 KB | samuel.mortenson
#13 | 3020504-13.patch | 9.83 KB | samuel.mortenson
Comments
Comment #2
samuel.mortenson

@jlatorre Thanks for the issue! This is a common problem with Tome, and from what I can tell it is just Drupal/MySQL not being able to handle so many concurrent requests. I've been doing some light performance testing and profiling with Tome commands and, so far, Drupal is always the bottleneck.
In the short term, I would recommend tweaking the --process-count and --path-count options to find a combination that doesn't overload your site. A lower process count and a higher path count would mean fewer concurrent MySQL queries, which may help you here.

In the longer term, I think some paths forward here are:
1. Continue to profile and audit Tome performance to see if there's something we can improve - I'd love any help here, and would welcome any tweaks to improve performance.
2. Write a patch to retry failed tome:static-export-path invocations at least one time, and maybe make the retry count configurable.

Comment #3
jlatorre CreditAttribution: jlatorre commented

Thanks for the reply! We are gathering information about this and will work on it.
Comment #4
jlatorre CreditAttribution: jlatorre commented

Here is a first try. What would you recommend based on what I've done?
Comment #5
jlatorre CreditAttribution: jlatorre commented

Re-uploaded the patch because of an indentation mistake, and added the missing phpDoc.
Comment #6
samuel.mortenson

Thanks for the patch @jlatorre! This is basically how I would have implemented adding a configurable "retry-count" option to the static command. Do you think this change will be enough to get around your "MySQL server has gone away" issues?
Here's some review of your patch:
- There are other commands that use this trait in tome_sync; they will also need to be updated so that they continue to function.
- Why stop the current process here?
- Can we use strict equality here, i.e. ===?
- Thinking through the logic here: if the retry count was set to 2, I think this loop would only retry one time.
  Loop 1 - $retry is 0, the command is executed for the first time.
  Loop 2 - $retry is 1, the command is retried once.
  Loop 3 - $retry is 2, the command is not retried again and this error is shown.
- These lines shouldn't be changed - I'm guessing the patch was made against an older release of Tome.
- This newline can be removed.
- Add a newline above this block comment.
Comment #7
jlatorre CreditAttribution: jlatorre commented

Thanks for the review, it helps a lot! I made a few modifications to the patch.
I'm not sure this will end all "MySQL server has gone away" errors, but at least it should lead to fewer missing paths (for example, when a path fails because of a temporary error).
About this part, I'm not sure, but as I read it the while loop is only entered after the command has already run once, so this would make sense.
That would give:
Loop 1 - $retry is 0, the command has already been executed for the first time -> start the process again, $retry = 1.
Loop 2 - $retry is 1, the command has failed a second time -> start the process again, $retry = 2.
Loop 3 - $retry is 2, the command is not retried again and this error is shown.
Comment #8
jlatorre CreditAttribution: jlatorre commented

Comment #9
samuel.mortenson

For the retry count - users will expect that if the retry-count option is set to "2", a failing command will be executed three times: once for the first execution, and then two retries. Does that make more sense?
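A minimal shell sketch of these semantics, assuming a retry counter that starts at -1 (as the next comment's patch does); failing_command is a hypothetical stand-in for a command that always fails:

```shell
# With retry_count=2, a failing command should run 3 times in total:
# 1 initial execution plus 2 retries.
retry_count=2
attempts=0
failing_command() { return 1; }  # stand-in that always fails

status=1
retry=-1
while [ "$retry" -lt "$retry_count" ]; do
  attempts=$((attempts + 1))
  if failing_command; then
    status=0
    break
  fi
  retry=$((retry + 1))
done
echo "attempts=$attempts"  # 3 total executions
```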
Comment #10
jlatorre CreditAttribution: jlatorre commented

Just changed the retry counter to start at -1 instead of 0. I think this makes more sense if "retry" means how many additional tries you want after a failure.
Comment #11
jlatorre CreditAttribution: jlatorre commented

Comment #12
samuel.mortenson

This is looking good, thanks again @jlatorre. I'll tweak some things on commit, but this should get into the next release.
Comment #13
samuel.mortenson

After testing this a bit, I found that Drush can have error output but still return 0, which means you have to check the error output as well.
Also, this while loop is blocking - meaning that retries of different processes can never happen in parallel.
Here's a new patch which moves all the retry checking to another callback before the normal process filter callback, and fixes some errors I found when testing #10.
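A minimal sketch of the "check error output, not just the exit code" idea, using a hypothetical flaky_command that prints to stderr yet still exits 0:

```shell
# Treat non-empty error output as a failure even when the exit code is 0,
# since a wrapped command (e.g. drush) can emit errors but still return 0.
flaky_command() { echo "some error" >&2; return 0; }  # hypothetical stand-in

stderr_file=$(mktemp)
flaky_command 2>"$stderr_file"
exit_code=$?
error_output=$(cat "$stderr_file")
rm -f "$stderr_file"

if [ "$exit_code" -ne 0 ] || [ -n "$error_output" ]; then
  result="failed"
else
  result="succeeded"
fi
echo "$result"  # "failed", despite the zero exit code
```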
Comment #14
samuel.mortenson

Comment #16
samuel.mortenson

This is in now - thanks for the help!