I wasn't sure where exactly to put this issue. Feel free to move it to a more appropriate place.
Problem/Motivation
Because we currently do not have the core maintainer capacity to review and commit all RTBC patches within a day or two, a lot of conflicting patches get RTBC'ed, and nobody notices that a patch has become inapplicable until a core maintainer goes through the issue and tries to apply it. This, in turn, wastes valuable core maintainer time and slows down patch committing even further.
For the past week or two I have been manually re-testing patches that are five or more days old, and I have been able to re-roll some of the ones that failed, but ideally this re-testing should be automated.
Proposed resolution
Let drupal.org automatically re-test every RTBC patch once a day, so developers can quickly re-roll the ones that no longer apply before a core maintainer accidentally spends time on them.
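A minimal sketch of the kind of check such automation could run against each patch. The function name `check_patch_applies` and the output strings are illustrative assumptions, not drupal.org's actual testbot implementation:

```shell
# check_patch_applies: given a patch file, report whether it still
# applies cleanly to the current checkout (illustrative sketch).
check_patch_applies() {
  patch_file="$1"
  # `git apply --check` is a dry run: it reports success or failure
  # without modifying any files.
  if git apply --check "$patch_file" 2>/dev/null; then
    echo "still applies"
  else
    echo "needs reroll"
  fi
}
```

Run daily against every RTBC patch, anything that prints "needs reroll" could be flagged for its author without a core maintainer ever opening the issue.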
Comments
Comment #1
alexpott
In my opinion this is only valuable if a comment is posted back to the issue only when the patch fails. That also brings up the problem that we still have too many random failures on the testbot due to environment issues. Not posting a comment on success means the issue would not be artificially bumped, so processing the RTBC queue in FIFO (first in, first out) order remains possible.
Comment #2
Xano
We could possibly address that by disabling the automatic "foobar.patch queued for re-testing." comments altogether. Does anyone think it's relevant to know that a patch is being re-tested at all, or are people only interested in knowing when it has failed?
Comment #3
tim.plunkett
I think those comments should only be generated if the issue was "needs work" when re-testing, since that would move it to "needs review".
Comment #4
YesCT CreditAttribution: YesCT commented
Comment #5
catch
Widening the scope of this to CNR ("needs review") patches as well. There are currently around 1400 patches at "needs review" against Drupal 8; the oldest haven't been updated for two years.
Apart from patches that are under heavy architectural development, patches shouldn't really ever sit at "needs review", because that status means they're waiting for review. Automating the "does not pass tests" and "does not apply" parts of that review ought to help reviewers focus on the patches that are viable, and it ensures that patches sitting in the queue have a chance to be updated by their authors when they go stale, before someone else looks at them.
Comment #6
YesCT CreditAttribution: YesCT commented
It is useful to know when a patch has been re-tested and passed (or even just still applied). This helps narrow down the best place to rebase from for a re-roll. Can we *append* testing results instead of posting new comments for passes?
Are we assuming that "needs review" and "needs work" are mutually exclusive? #2013222: Add "Issue tasks" to project issues and correlate tasks with handbook documentation starts to add more statuses, but I don't see a generic "needs review" there. I thought there was one (and that an issue could be "needs review" and "needs work" at the same time), but there isn't.
Comment #7
catch
Could that be handled by adding the tested date into the results next to the file?
I haven't been following that issue, but regardless of any changes there, there will still be situations where an issue has all tasks done but doesn't go straight to RTBC (e.g. the patch author posts a new patch that fixes a test failure). In those cases automated re-checking could bump the patch out of that status into "needs reroll" or whatever.
Comment #8
drumm
We have daily re-testing of 8.x core patches now. I think this issue needs an update.
Comment #9
catch
I think this is covered by #2233265: [Policy] Frequent re-testing of RTBC/CNR patches for whether they apply and/or linting and #2252787: Automatically reroll patches that do not apply (post new patch that has a successful merge with no conflicts).