Haven't tracked down why yet, but I tested this with a large database (600k nodes). First it took forever to run through all nodes (#2015277: Reduce the number of indexes on the node_field_* tables should help with that), and then it kept going and going even though I had a row in node_field_data for every node. So somehow the calculated maximum didn't match the actual rows; maybe there are stale rows in node_revision that don't return anything when joined with node?
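
One quick way to check that hypothesis, assuming the current core table names (just a diagnostic query, not part of the update itself):

// A non-zero count would mean the calculated maximum includes revision rows
// that never produce a result when joined against node.
$orphans = db_query('SELECT COUNT(*) FROM {node_revision} r LEFT JOIN {node} n ON n.nid = r.nid WHERE n.nid IS NULL')->fetchField();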

Should be easy to check whether we got fewer than 50 results back and, if so, force the batch to be finished.
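
Rough sketch of what that could look like inside the batch loop (the function name, query, and sandbox keys here are illustrative, not the actual node_update_8017() code):

function example_update_sketch(&$sandbox) {
  $limit = 50;
  // Load the next batch of node IDs, starting after the last one processed.
  $nids = db_query_range('SELECT nid FROM {node} WHERE nid > :last ORDER BY nid', 0, $limit, array(':last' => $sandbox['last_nid']))->fetchCol();

  // ... process the returned rows and advance $sandbox['last_nid'] ...
  $sandbox['progress'] += count($nids);

  // Fewer rows than the limit means there is nothing left to process,
  // regardless of what the pre-calculated maximum says.
  if (count($nids) < $limit) {
    $sandbox['#finished'] = 1;
  }
  else {
    $sandbox['#finished'] = $sandbox['progress'] / max($sandbox['max'], 1);
  }
}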

Comments

plach’s picture

#1998366: [meta] SQLite is broken is going to heavily touch node_update_8017(). Can we close this as a duplicate or at least postpone it?

Berdir’s picture

Issue tags: +Needs tests

I'm fine with postponing, but I think it makes sense to keep this open so that we don't forget about it, since it's not really related to that other issue.

I'll see what I can do about adjusting the upgrade path to trigger this problem.

plach’s picture

Status: Active » Postponed

Ok, but given that we are going to touch exactly that code, we might have a chance to fix this over there. If you can track down the cause here, maybe we can merge the issues :)

mgifford’s picture

Issue summary: View changes
Status: Postponed » Active
Related issues: +#1998366: [meta] SQLite is broken

Ok, so now that #1998366: [meta] SQLite is broken is in, what else needs to happen so that this isn't forgotten?

amateescu’s picture

Status: Active » Closed (cannot reproduce)

node_update_8017() doesn't exist anymore :)