I noticed that some of my larger background_process_http_requests were failing. I looked through the issue queues and found various suggested fixes and work-arounds. After implementing all of them, things are working again, but at the moment I'm not sure which of these is critical.
Is there a way to enhance the background_process module so that there is more detailed logging about the reasons for failed requests?
Here are the (working) fixes I've applied:
(0) At admin/config/system/background-process:
Change the connection timeout and stream timeout to 60, and set the redispatch threshold to 65.
(1) From https://drupal.org/node/1443264 -
Add this to settings.php
// OUT OF MEMORY PROTECTION
// To reduce the risk of shutdown handlers failing, we reserve 4 MB of memory.
$GLOBALS['__RESERVED_MEMORY'] = str_repeat('0', 1024 * 1024 * 4);
register_shutdown_function('_out_of_memory_protection');
function _out_of_memory_protection() {
  // Release the reserved memory so shutdown handlers have room to run.
  unset($GLOBALS['__RESERVED_MEMORY']);
}
(2) From https://drupal.org/node/1627924 -
In MySQL:
ALTER TABLE background_process ENGINE=MYISAM;
and/or add this to settings.php (after the definition of $databases):
$databases['background_process'] = $databases['default'];
Comment | File | Size | Author
---|---|---|---
#5 | background-process-error-handling.JPG | 70.62 KB | RAWDESK
Comments
Comment #1
holtzermann17 CreditAttribution: holtzermann17 commented
Update: It seems like everything is taken care of just by the one line:
...
Comment #2
gielfeldt CreditAttribution: gielfeldt commented
Hi
Sorry for the late reply.
If that's the case, then it looks like you're a victim of transactions. The most common case of this is launching a background process inside hook_node_insert()/hook_node_update().
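A minimal sketch of the pitfall described above, with hypothetical module and function names (mymodule, mymodule_heavy_worker): Drupal 7 wraps node saving in a database transaction, so a process record written from inside hook_node_insert() may not be committed, and therefore not visible to the dispatched request, when that request arrives.

```php
<?php
/**
 * Implements hook_node_insert().
 *
 * Hypothetical example of the transaction pitfall: node saving runs inside
 * an open transaction, so the row written by background_process_start()
 * below is not committed until after this hook returns. The dispatched HTTP
 * request can therefore arrive before the process record is visible.
 */
function mymodule_node_insert($node) {
  // This registers the background process inside the open transaction.
  background_process_start('mymodule_heavy_worker', $node->nid);
}

/**
 * Hypothetical worker callback; runs in the background request.
 */
function mymodule_heavy_worker($nid) {
  // Long-running work happens here, outside the page request.
}
```

Both fixes in (2) work around this by giving background_process a database connection (or engine) that ignores the page request's transactional state.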
Regarding this
In MySQL:
ALTER TABLE background_process ENGINE=MYISAM;
and/or add this to settings.php (after the definition of $databases):
Only one of these two should be necessary, as either of them will bypass the transactional state of the default database connection.
Regarding better debugging/logging, this is very much on my mind. I plan to incorporate this into 2.x.
Comment #3
holtzermann17 CreditAttribution: holtzermann17 commented
That is indeed the case for me. Thanks!
Comment #4
gielfeldt CreditAttribution: gielfeldt commented
I plan to implement better logging in 2.x.
Comment #5
RAWDESK CreditAttribution: RAWDESK commented
Hi all,
As of today I am using background_process_start_locked() to asynchronously post commerce orders to our shipping partner.
Since they cannot guarantee 100% availability of their web services, I need to build in:
1. this background_process driven http_client service;
2. a 60-second execution timeframe before handling a time-out event;
3. resuming a timed-out locked request until it succeeds.
Since I am new to setting up this kind of processing, I also have some questions regarding logging/diagnostics for failed requests:
- Are there already efforts made to incorporate better logging?
- Where can I find documentation guidelines for my point 3? How do I configure my settings so that redispatched requests aren't accepted multiple times by my shipping partner's web service?
- The shipping partner's web service also throws 503 or 504 HTTP status errors. On these events, I would like to resume the locked request. Is there a way to 'handle' this from within the called function somehow?
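One possible pattern for the last point, sketched under assumptions: the callback performs the HTTP call itself with drupal_http_request() and, on a 503/504 response, re-dispatches the same locked request. The function names, lock name, and URL are hypothetical, and the exact background_process_start_locked() signature should be checked against the module; whether re-starting from inside the callback is the intended 1.x usage is a question for the maintainer.

```php
<?php
/**
 * Hypothetical background_process callback that retries on 503/504.
 *
 * Assumes background_process_start_locked($lock, $callback, ...) accepts
 * extra arguments for the callback, as in the usage described above;
 * adjust to the module's actual signature.
 */
function mymodule_post_order($order_id, $payload) {
  $response = drupal_http_request('https://shipping.example.com/api/orders', array(
    'method' => 'POST',
    'data' => $payload,
    'timeout' => 60,
  ));
  if (in_array($response->code, array(503, 504))) {
    // Service unavailable: log the failure, then re-dispatch the same
    // locked request so it is resumed rather than lost.
    watchdog('mymodule', 'Shipping API returned @code for order @id; retrying.',
      array('@code' => $response->code, '@id' => $order_id), WATCHDOG_WARNING);
    background_process_start_locked('mymodule_order_' . $order_id,
      'mymodule_post_order', $order_id, $payload);
  }
}
```

Using the order ID in the lock name would also guard against the same order being dispatched twice concurrently, which touches on the duplicate-delivery concern in point 3.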