The comment above FeedsProcessor::getLimit() states that 0 means unlimited. On the sites I run, we set the limit to 0 because we perform all our feed processing in scheduled cron runs, so limiting the number of items that get processed in one run just increases the time it takes to process larger updates.
FeedsProcessor::clear() and ::expire() pass the return value from getLimit() directly to $select->range() as the second argument. When getLimit() returns 0, this produces range(0, 0), i.e. a query with LIMIT 0, so no items get cleared or expired.
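A minimal sketch of the pattern being described (paraphrased, not the verbatim Feeds code; the table and field names here are illustrative). In Drupal 7's query builder, range($start, $length) with a length of 0 becomes LIMIT 0:

```php
<?php
// Paraphrased sketch of the pattern in FeedsProcessor::clear()/::expire().
// Table, fields, and conditions are illustrative, not the exact Feeds code.
$select = db_select('feeds_item', 'fi')
  ->fields('fi', array('entity_id'))
  ->condition('fi.id', $source->id);
// When getLimit() returns 0, this becomes "LIMIT 0 OFFSET 0":
// the query returns no rows, so nothing is ever cleared or expired.
$select->range(0, $this->getLimit());
$entity_ids = $select->execute()->fetchCol();
```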
Comments
Comment #2
MegaChriz (CreditAttribution: MegaChriz as a volunteer) commented:

I see. Here are the relevant code lines.
From FeedsProcessor::clear() and from FeedsProcessor::expire():

If we simply skip the call to range() when getLimit() returns 0, PHP could potentially run out of memory if the list of entity IDs is very large. So maybe the code should be wrapped inside a while loop when the limit is 0? Suggestions or patches are welcome.

Comment #3
j.matthew (CreditAttribution: j.matthew) commented:

I think that's one of the factors you have to take into consideration before setting feeds_process_limit to 0, and certainly something you'd have to think about in general if you were processing feeds so large that just storing the entity IDs takes a significant amount of memory.
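The while-loop idea from #2 might look something like the following sketch (illustrative only, not a tested patch; table and field names are assumptions). Deleting each batch is expected to remove the corresponding feeds_item rows, so each iteration's query picks up fresh IDs:

```php
<?php
// Illustrative sketch: when the limit is 0 (unlimited), process the
// IDs in fixed-size batches instead of loading the full list at once.
$batch_size = 100;
do {
  $entity_ids = db_select('feeds_item', 'fi')
    ->fields('fi', array('entity_id'))
    ->condition('fi.id', $source->id)
    ->range(0, $batch_size)
    ->execute()
    ->fetchCol();
  if ($entity_ids) {
    // Assumes deleting the entities also removes their feeds_item rows,
    // so the loop terminates once no rows remain.
    $this->entityDeleteMultiple($entity_ids);
  }
} while ($entity_ids);
```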
Comment #4
MegaChriz (CreditAttribution: MegaChriz as a volunteer) commented:

It's not just the entity IDs. The entity IDs are passed to FeedsProcessor::entityDeleteMultiple(). The node processor then passes them to node_delete_multiple(), and that function first loads all nodes whose IDs are passed. See https://api.drupal.org/api/drupal/modules%21node%21node.module/function/.... That could take a huge amount of memory.

Comment #5
MegaChriz (CreditAttribution: MegaChriz as a volunteer) commented:

I see that besides the potential memory issue, there is also a limit to the number of entity IDs that can be safely passed around; see #1210606: Document that operations that delete in bulk can hit limits for the number of arguments.
I think we should fix this issue with a solution similar to the patch in #1210092-3: PDOException: SQLSTATE[HY000]: General error: 1 too many SQL variables: SELECT t.* FROM {field_data_body} t WHERE (entity_type =.
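Along those lines, a solution based on chunking (a sketch under the assumption that this is the shape of the referenced patch, with an illustrative chunk size) would keep each query under the database backend's placeholder limit:

```php
<?php
// Illustrative sketch: delete in chunks so no single query receives more
// bound parameters than the backend allows (e.g. SQLite defaults to 999).
foreach (array_chunk($entity_ids, 500) as $chunk) {
  $this->entityDeleteMultiple($chunk);
}
```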