I haven't tracked down why yet, but I tested this with a large database (600k nodes): first it took forever to run through all the nodes (#2015277: Reduce the number of indexes on the node_field_* tables should help with that), and then it kept going even though I had a row in node_field_data for every node. So somehow the calculated maximum didn't match the actual number of rows; maybe there were stale rows in node_revision that returned nothing when joined with node?
It should be easy to check whether we got fewer than 50 results back and, if so, force the batch to be marked as finished.
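A minimal sketch of that check, assuming a Drupal 8-style batched update hook where `$sandbox['max']` was precomputed up front (the function name and sandbox keys here are illustrative, not the actual update in question):

```php
/**
 * Illustrative batched update hook: process nodes in chunks of 50.
 */
function node_update_example(&$sandbox) {
  $limit = 50;
  $nids = db_query_range(
    'SELECT nid FROM {node} WHERE nid > :last ORDER BY nid',
    0, $limit, array(':last' => $sandbox['last_nid'])
  )->fetchCol();

  foreach ($nids as $nid) {
    // ... migrate this node's data ...
    $sandbox['last_nid'] = $nid;
    $sandbox['progress']++;
  }

  // Don't rely on the precomputed max alone: if the query returned fewer
  // than a full chunk, there is nothing left to process, so force
  // completion even if $sandbox['progress'] never reaches the
  // (possibly stale) $sandbox['max'].
  if (count($nids) < $limit) {
    $sandbox['#finished'] = 1;
  }
  else {
    $sandbox['#finished'] = $sandbox['progress'] / $sandbox['max'];
  }
}
```

This way a stale row count only makes the progress estimate inaccurate; it can no longer cause the update to loop forever.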
Comments
Comment #1
plach
#1998366: [meta] SQLite is broken is going to heavily touch node_update_8017(). Can we close this as a duplicate, or at least postpone it?
Comment #2
Berdir
I'm fine with postponing, but I think it makes sense to keep this open so that we don't forget about it, as it's not really related to that other issue.
I'll see what I can do about adjusting the upgrade path to trigger this problem.
Comment #3
plach
Ok, but given that we are going to touch exactly that code, we might have a chance to fix this over there. If you can track down the cause here, maybe we can merge the issues :)
Comment #4
mgifford
Ok, so now that #1998366: [meta] SQLite is broken is in, what else needs to happen so that this isn't forgotten?
Comment #5
amateescu
node_update_8017() doesn't exist anymore :)