Problem/Motivation
When the simple_sitemap queue holds a huge number of items after a full regeneration, the PHP memory limit can be reached. In our case this occurs with 300k+ items in the queue:
SELECT COUNT(*) FROM queue WHERE name LIKE '%simple_sitemap%'
======
count(*)
334491
Steps to reproduce
1. Generate enough content to have 300k+ records in the queue
2. Run drush ssr to queue the entities for sitemap regeneration
3. Run drush ssg to generate the sitemap
Proposed resolution
Load queue items in chunks
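The idea behind the patch can be sketched as follows. This is a Python illustration, not the module's PHP; `fetch` and the toy in-memory `store` are hypothetical stand-ins for the queue table, and the chunk size of 50 is just an example. Only one chunk is held in memory at a time instead of the full 300k+ item set.

```python
CHUNK_SIZE = 50


def iter_chunks(fetch, chunk_size=CHUNK_SIZE):
    """Yield queue items chunk by chunk; only one chunk is in memory at once."""
    while True:
        chunk = fetch(chunk_size)
        if not chunk:
            return
        yield from chunk


# Toy backing store standing in for the queue table (200 items here).
store = list(range(1, 201))


def fetch(limit):
    """Claim up to `limit` items from the front and remove them from the store."""
    chunk = store[:limit]
    del store[:limit]
    return chunk


processed = list(iter_chunks(fetch))
print(len(processed))  # all 200 items processed, 50 at a time
```

Note that `fetch` always reads from the front of the store, so no offset bookkeeping is needed; removing claimed items compacts the queue naturally.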
Remaining tasks
User interface changes
API changes
Data model changes
Comment | File | Size | Author
---|---|---|---
#12 | simple_sitemap-yield_memory-3231298-12.patch | 1.2 KB | maximpodorov
#2 | 3231298-1.patch | 1.19 KB | dmitry.korhov
Issue fork simple_sitemap-3231298
Comments
Comment #2
dmitry.korhov
Comment #3
dmitry.korhov
Comment #9
dmitry.korhov
Comment #11
gbyte CreditAttribution: gbyte as a volunteer and at gbyte commented
3.x is at feature freeze and 4.x has been deemed good enough for non-API-dependent use cases. Let's test the necessity of this in 4.x and implement if necessary.
Comment #12
maximpodorov CreditAttribution: maximpodorov commented
Re-rolled for 4.x-dev.
Comment #13
Berdir
Are you sure the offset makes sense?
Aren't finished queue items removed when completed successfully? According to \Drupal\simple_sitemap\Queue\QueueWorker::generate, they are.
That means you load items 1-50, then skip 51-100, load 101-150, then skip 151-200, and so on.
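The skipping behavior Berdir describes can be demonstrated with a small sketch. This is Python, not the module's PHP; the chunk size of 3 and the list-based queue are purely illustrative. The buggy variant advances an offset even though completed items are deleted from the queue, so every other chunk is skipped.

```python
CHUNK = 3  # small chunk size for illustration


def drain_with_offset(queue):
    """Buggy pattern: keep advancing an offset while completed items are deleted."""
    done = []
    offset = 0
    while True:
        chunk = queue[offset:offset + CHUNK]
        if not chunk:
            break
        done.extend(chunk)
        del queue[offset:offset + CHUNK]  # completed items are removed...
        offset += CHUNK                   # ...but the offset still advances past live items
    return done


def drain_from_front(queue):
    """Correct pattern: always read from the front, since deletions compact the queue."""
    done = []
    while queue:
        done.extend(queue[:CHUNK])
        del queue[:CHUNK]
    return done


items = list(range(1, 13))
print(drain_with_offset(items.copy()))  # [1, 2, 3, 7, 8, 9] -- chunks 4-6 and 10-12 skipped
print(drain_from_front(items.copy()))   # [1, ..., 12] -- nothing skipped
```

With 12 items and a chunk size of 3, the offset variant processes only half of the queue, which is exactly the 1-50 / 101-150 pattern described above scaled down.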