Shouldn't deletion of a node also remove the rows in node_access with the matching nid? Leaving them behind just makes the node_access table larger than it needs to be, eventually adversely affecting performance.
Comment | File | Size | Author |
---|---|---|---|
#27 | 118108.patch | 857 bytes | bleen |
#25 | 118108.patch | 533 bytes | bleen |
#17 | node_access.patch | 780 bytes | bleen |
#15 | node_access.patch | 1.52 KB | bleen |
#8 | nodeaccess.patch | 748 bytes | bleen |
Comments
Comment #1
moshe weitzman commented: node.module manages the {node_access} table by arbitrating all writes. Moving to core.
Comment #2
cburschka commented: So should the patched node_delete() run
DELETE FROM {node_access} WHERE nid=%d
itself, or should it call node_access_write_grants($node, array())?
Ready to patch...
Comment #3
cburschka commented: Also, how about adding a node_update_N() that retroactively cleans {node_access}?
Comment #4
moshe weitzman commented: It should delete by itself. An update function makes sense if one query can just delete all the orphans. If you have to iterate over all nodes, then a node_access_rebuild() is in order.
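The fix the thread converges on can be sketched as follows. This is a hedged approximation of the Drupal 6-era node_delete(), not the actual committed hunk; db_query() and the {table} placeholder syntax are from the core database API of that era:

```php
<?php
// Sketch: inside node_delete(), after the {node} and {node_revisions}
// rows are removed, also drop the node's grant rows so {node_access}
// does not accumulate orphans. The exact placement within
// node_delete() is an assumption.
function node_delete($nid) {
  $node = node_load($nid);

  db_query('DELETE FROM {node} WHERE nid = %d', $node->nid);
  db_query('DELETE FROM {node_revisions} WHERE nid = %d', $node->nid);
  // The one-line addition discussed in this issue:
  db_query('DELETE FROM {node_access} WHERE nid = %d', $node->nid);

  // ... remaining cleanup (search index, hook_nodeapi('delete'), etc.)
}
```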
Comment #5
cburschka commented: Doing this in one query would require an outer join followed by deleting all the entries that are joined with NULL. However, I don't believe it's possible to join and delete in the same query (my SQL-fu is weak). When I need to do something similar, I first run a joined UPDATE that marks the entries as "deletable", then delete the marked rows in a second query, which still avoids iterating.
Compatibility with PostgreSQL is another matter, but I've been able to get MySQL 4 and 5 to do this:
This could be called a hack, though, so I'm not sure.
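The actual statement was not preserved in this comment. A MySQL multi-table DELETE of the kind described (outer join, then delete the rows whose {node} side is NULL) might look like the following reconstruction, which is an assumption rather than cburschka's original query:

```sql
-- Assumption: a reconstruction of the sort of query described above.
-- MySQL's multi-table DELETE removes {node_access} rows that no
-- longer join to an existing node, in a single statement.
DELETE na FROM node_access na
LEFT JOIN node n ON na.nid = n.nid
WHERE n.nid IS NULL
  AND na.nid <> 0;  -- nid 0 is the default "view all" grant row; keep it
```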
Comment #6
wedge commented:
DELETE FROM {node_access} WHERE nid NOT IN (SELECT nid FROM {node})
?

Comment #7
bleen commented: It appears this is still a problem in D7.
Comment #8
bleen commented: This should do it ... when this gets committed, I'll write a backport.
Comment #10
bleen commented: #8: nodeaccess.patch queued for re-testing.
Comment #11
bleen commented: bump
Comment #12
moshe weitzman commented: Nice catch.
Comment #13
Dries commented: Committed to CVS HEAD. Thanks.
Comment #14
bleen commented: This should be ported back to D6 ... I'll slap a patch together this weekend. One question ... should I include a hook_update() with the query from #6 in the patch? Thoughts?
Comment #15
bleen commented: This is a patch for D6 ...
Comment #16
moshe weitzman commented: That update function could take a long time on a huge site. Anyone concerned about that? We could consider fixing the bug but not pruning the table.
Comment #17
bleen commented: This is a patch without the update function ... I think an update would be useful, but probably better safe than sorry.
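For reference, the update function being weighed here (and ultimately left out) could have looked roughly like this, using the query from #6. The function name, update number, and nid 0 guard are assumptions; update_sql() is the Drupal 6 update API:

```php
<?php
/**
 * Hypothetical update function: purge {node_access} rows orphaned by
 * earlier node deletions. This was deliberately left out of the
 * committed patch over concerns about run time on large sites (#16).
 */
function node_update_6056() {
  $ret = array();
  // nid 0 holds the default grant row and must survive the purge.
  $ret[] = update_sql('DELETE FROM {node_access} WHERE nid <> 0 AND nid NOT IN (SELECT nid FROM {node})');
  return $ret;
}
```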
Comment #19
bleen commented: #17: node_access.patch queued for re-testing.
Comment #21
bleen commented: Hmmm ... any idea why this won't apply?
Comment #22
sixty4rpm commented: Sorry to be a pest -- I haven't seen any news on this effort for a while. No big emergency; it's easy enough to delete the orphan rows by hand in the meantime, but I'm wondering if there is a patch or solution that I have not seen?
Cheers!
Comment #23
bleen commented: #17: node_access.patch queued for re-testing.
Comment #24
Alexander Allen commented: I'm working on a system with half a million nodes, and because of this bug there are around two million entries in the {node_access} table. Scanning two million rows on each page view adds significant database overhead. Let me know if I can help with this.
Comment #25
bleen commented: Let's try this patch.
Comment #27
bleen commented: Maybe this one...
Comment #29
naxoc commented: The bot complains that the patch does not apply. It does apply and looks fine to me, so I am RTBC-ing it.
Comment #30
lcollet commented: Dear all,
Personally, I used hook_node_grants() and hook_node_access_records(), and to avoid having dead rows in my database I implemented hook_nodeapi() to solve this problem. See below.
It kept me from touching core, and it is quite easy to implement! :p
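lcollet's code did not survive in this thread. A contrib-module version of the same idea, using the Drupal 6 hook_nodeapi() 'delete' operation, might look like the sketch below; the module name mymodule is a placeholder, and the exact body is an assumption:

```php
<?php
/**
 * Implementation of hook_nodeapi() (Drupal 6 signature).
 *
 * Sketch of the approach lcollet describes: when any node is deleted,
 * remove its rows from {node_access} from a contrib module, so core
 * stays untouched.
 */
function mymodule_nodeapi(&$node, $op, $a3 = NULL, $a4 = NULL) {
  if ($op == 'delete') {
    db_query('DELETE FROM {node_access} WHERE nid = %d', $node->nid);
  }
}
```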
Comment #31
Gábor Hojtsy commented: I agree it is fine not to add an update function here. Better to be safe. Committed the last patch. Thanks!