Strategies for big core patches.

There have been several discussions in the past about the difficulty of getting large patches into core. And between D8 initiatives and everyone's general plans this is going to be coming up a lot as 8.x gets moving.

Here are a few ideas that came from a chat with catch, chx, Dries, and webchick.

Schedule patches

Dries has scheduled the /core move patch for November 1st. We could potentially do that for any patch that is going to move thousands of lines of code around. This means people rolling the patch know they have a specific week it needs to be ready for, and everyone else knows that their patches are going to get broken after that week. In both cases it should help cut down on 're-roll hell', which is easier with git but has not gone away. It could also present an opportunity for widespread clean-up patches like renaming/moving files, fixing coding standards, documentation standards, etc., since everyone's patches are going to break anyway.

Temporary backwards compatibility layers

dbtng went in with a backwards compatibility layer so that existing calls to db_query() would continue to work temporarily. This allowed the work of converting everything to the rest of the API to happen over time (iirc it took a year or 18 months to actually remove the bc layer). Another issue is a much smaller change, but the current version of the patch there also contains a bc layer for the same reason: it keeps the patch at a reviewable size and saves breaking things all at once.

This is not how we have traditionally handled things, so it could use some discussion. We need to figure out what happens with the follow-up work (is it major/critical tasks?). Should we use @deprecated in phpdoc on the old functions to make it clear to people chasing HEAD? Could we maybe throw E_DEPRECATED errors (if we can avoid tests failing due to those)?
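As a rough sketch of what such a shim could look like (the function names below are hypothetical, not Drupal's actual cache API): note that userland code cannot raise E_DEPRECATED directly, since that level is reserved for the PHP engine; trigger_error() with E_USER_DEPRECATED is the standard substitute.

```php
<?php

// Hypothetical new-style API function; names are illustrative only.
function cache_set_new($cid, $data) {
  $GLOBALS['example_cache'][$cid] = $data;
}

/**
 * Old-style function kept as a temporary backwards compatibility shim.
 *
 * @deprecated Use cache_set_new() instead.
 */
function cache_set_old($cid, $data) {
  // Userland code cannot raise E_DEPRECATED (reserved for the engine);
  // E_USER_DEPRECATED is the equivalent for our own deprecation notices.
  trigger_error('cache_set_old() is deprecated; use cache_set_new().', E_USER_DEPRECATED);
  cache_set_new($cid, $data);
}

// Keep deprecation notices from failing tests by excluding them from
// error reporting while the conversion is in progress.
error_reporting(E_ALL & ~E_USER_DEPRECATED);

// Old callers keep working while conversions happen over time.
cache_set_old('foo', 42);
```

The @deprecated tag makes the intent visible to people chasing HEAD, and grepping for E_USER_DEPRECATED later gives a checklist of call sites to clean up before the shim is removed.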

Temporary hiding of failing tests

In a worst-case scenario, if you can't leave a backwards compatibility layer in while doing major refactoring, another option is to silence the failing tests in optional modules that are not enabled by the standard profile, either by commenting them out or by adding an expectedFail() method to the Testing module. Each of these would require opening a critical task to put back, and thus would go against our thresholds. While this would likely hold up further refactoring, that's kind of the point. And in almost all cases, making similar changes in several modules is something we could tag as "Novice" and point new core contributors at, so they should get cleaned up relatively easily.

If it's broken, roll it back

Our current approach when an unexpected regression is found or 'HEAD is broken' is to all scramble and try to fix it. Instead we could try immediately rolling back the patch (assuming it's not in a stable release yet), fixing the regression in the issue itself, then re-committing along with regression tests. This would shift the burden from being a critical core bug back to just being dealt with in the issue.

This requires a bit of a cultural change, to be accepting that if your patch rolls back it doesn't mean it's not getting in; it just means it can't get in until it's actually ready. :)


The goal of this issue is to create a documentation page in the handbook that outlines the process someone wanting to do major refactoring to Drupal core should go through. Let's discuss strategies and come up with a plan.

Proposed resolution

On August 4, 2011 at 9:43am, Dries set a precedent by posting:

I've looked at this and I'm comfortable with the patch. Given that it may break a lot of patches, I'd like to propose that I commit this on November 1st, 2011.

Other senior developers, committers, and approvers have followed this example by delaying or pre-scheduling large patches that would make many patches need rerolls.

In order to promote harmony and reduce conflict, this policy should be discussed and standardized.

Remaining tasks

Documentation page

Whatever is decided here should be published as a documentation page for reference. This documentation page should be linked from strategic places.

Standard workflow

Provide a standard workflow for coordinating, announcing, and committing such patches. For example, it should be possible for developers to check whether their patch is likely to conflict with a pre-approved patch that is waiting on a scheduled commit. It has been proposed that pre-scheduled core commits be published on the Drupal core and/or the Issue Triage event calendars.

Threshold guidelines

Some feel that published threshold guidelines may help decide whether a patch is big or invasive enough to warrant a pre-announced and pre-scheduled commit. Others think that patches of this size are few enough that a "common sense" approach is all that is needed.

Is it possible for PIFR/PIFT to report the total number of RTBC patches which overlap or conflict with the patch being tested?


bleen18’s picture

@catch ... I think we first need to define "big" here...

50k? 100k? 500k?

10 lines? 100 lines? 1000 lines?

A patch that just "feels" big due to its complexity, etc?

...or just, this patch touches a LOT of core (e.g. a 3-line patch that refactors the l() function)

jhodgdon’s picture

bleen: it's not so much the size of the patch, as the number of other issues' patches that are in the review queue that would break, I think?

podarok’s picture

I think 30K +/- is big.

webchick’s picture

Title: Strategies for big core patches » Strategies for far-reaching core patches

Yeah, let's not base this on size. This is more about "far-reaching" patches. Adjusting title.

What we mean by that is patches that change something in one place that has ripple effects in lots of other places (or potentially every other place).

Here are some examples:

#22336: Move all core Drupal files under a /core folder to improve usability and upgrades
#225450: Database Layer: The Next Generation
#402896: Introduce DrupalCacheArray and use it for drupal_get_schema()
#81461: Clean up the cache API (stop overloading function arguments, remove procedural wrappers)
#1109202: [meta] Fix coding standards violations across Drupal core

moshe weitzman’s picture

IMO all of the proposed approaches are valid in their own way. I'm not really sure what resolution we need here beyond 'use these when needed'.

heyrocker’s picture

I agree with moshe, I think that depending on the patch any of the above approaches (or even some in combination) may be the preferred solution. It just depends.

We should also consider doing these major initiatives in a sandbox branch that merges into core, in addition to a patch workflow. We may not be doing it now but hopefully we will be some day. This can obviously allow a much more iterative workflow without a lot of the downsides above, assuming we can get the testbot into the sandboxes at some point.

heyrocker’s picture

sigh at double-posting

catch’s picture

I think big/far reaching has to be evaluated case by case, I'd usually apply it to "touches lots of files" though.

IMO all of the proposed approaches are valid in their own way. I'm not really sure what resolution we need here beyond 'use these when needed'.

I think that's all we need. Some Drupal 7 patches went through to change 30 different files at once, some got an initial patch in then opened followups - it's not clear to everyone (not even me) which approaches are actually likely to get past. So documenting it somewhere and being clear that you can do a 20kb patch to get something in, then followup with the 400kb patch once the API has been agreed and committed could save people a lot of headache.

One thing we need to consider here is it is going to add to the critical/major task backlog.

#81461: Clean up the cache API (stop overloading function arguments, remove procedural wrappers) just got in. This was a 'normal' task.

It's now spawned one critical task and one major:
#1272686: convert existing cache_* calls to the new cache()-> API
#1272706: Remove backwards compatibility layer for cache API

Both of these should be easy enough to fix but this means if we go over thresholds, one API change that gets in could hold up others until everything else is converted. Of course part of the idea is to break everything down into more manageable chunks and increase throughput, so if that happens hopefully this won't be the case.

Sree’s picture

subscribing ...

Gábor Hojtsy’s picture

My strategy so far in the Drupal 8 Multilingual Initiative is to try and avoid big patches as much as possible. As in, break them down into smaller ones. As the fate of many-many big patches shows, even if people agreed on some sprint or a sandbox or something, it will be debated and dissected in the issue queue anyway. That is my fear with the sandboxes for Drupal 8 that some of us are working with: that their patches / git pulls would touch many many things at once. I know, I know: small patches get comments like "this does not solve x, y and z, let's add those too", while big patches get comments like "this solves a, b and c in wrong ways, let's rewrite half of it". So both cases need "issue management" with an over-arching plan to show that changes will happen as needed later. In many cases, we lost belief that real follow-up work happens, and it's hard to predict the commit order for smaller changes, so our workflow moved to bigger patches in many cases.

Crell’s picture

Where possible I would favor BC layers marked @deprecated. Smaller patches are easier to digest. It also results in lots of on-ramp tasks for new contributors, something we lost with the testbot. The follow ups do not need to be critical, but the final bc removal issue should be major, probably.

Paul Simard’s picture

subscribing ...

Crell’s picture

I would also note that "if it breaks stuff, roll it back" is much easier to do if we do it using Git. Git should make it easier to "undo" a commit if you don't go through patches. That is, if we decide to back out a commit it's something that Drieschick can/should do directly, NOT wait for someone to roll an "undo" patch.
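The mechanics Crell describes can be sketched in a throwaway repository (not Drupal's actual one): `git revert` records the rollback as a new commit instead of rewriting published history, so nobody has to roll an "undo" patch.

```shell
# Demonstration in a scratch repository: commit a change, then back it
# out with `git revert`, which creates a new "rollback" commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "stable code" > module.inc
git add module.inc && git commit -qm "Initial commit"

echo "regression" > module.inc
git commit -qam "Far-reaching patch"

# HEAD turned out to be broken: revert it rather than scrambling to fix.
git revert --no-edit HEAD >/dev/null
cat module.inc   # back to "stable code"
```

Because the revert is itself a commit, the original patch can later be re-applied (or the revert reverted) once the regression is fixed in the issue.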

effulgentsia’s picture

thx for this issue. subscribe. sorry, nothing to add at this time.

pillarsdotnet’s picture

Issue summary: View changes

Copied "Proposed resolution" and "Remaining tasks" from duplicate issue #1294298.

pillarsdotnet’s picture

I propose that all pre-scheduled commits of "far-reaching core patches" be published as events on the Drupal core and/or Issue Triage group calendars.

I further propose the following thresholds for "far-reaching", any one of which should be sufficient by itself:

  • Touches at least half the lines of a required core file.
  • Changes the signature of a function called from at least half of all required core files.
  • Changes an API used by at least half of all required core files.
  • Provably conflicts with at least half of all currently RTBC core patches.

Note: By "required core files" I mean files that are loaded by a full bootstrap after installing the Standard profile.

catch’s picture

OK, I just tried to schedule #779204: Move simpletest/tests/*.test files into system/tests/*.test for November 4th, since that is a simple one that also moves a tonne of files around.

Those are easy because the only impact is file moving - so they are definitely going to break 100% of patches in the queue, but in such a way that re-rolls will be predictable.

I think this has to be case by case though. Here's some patches in the queue or recently committed that had various kinds of impact for discussion:

#983268: Use @implements Doxygen directive for hook implementations - 270k patch and similar level of impact to the above too (other patches will get broken but the patch itself is mostly scriptable). If we decided to do this I'd want to schedule it so people could avoid wasted re-rolls in the couple of days beforehand etc.

#1272706: Remove backwards compatibility layer for cache API - patch is not that big really, but breaks a commonly used API.

#1018602: Move entity system to a module - breaks (and, while RTBC, was broken by) any entity system patches, but doesn't affect anything else. Note this patch was not just moving code; it added two new hooks and a hook_help() implementation, though no API changes.

#334024: Use const keyword to define constants instead of define() - medium-sized patch again but touches lots of files as well as changing a coding standard.

Looking at those, the only ones I would want to schedule any distance in advance (i.e. more than 5 weeks or similar) are the ones that are very trivial to roll and commit, and are guaranteed to break dozens or hundreds of other patches. This is because the only mental overhead with those is in re-rolling (from both sides), and most other large patches at the moment are part of a series where they may hold up other follow-up work.

When it is more of a subsystem patch (or changing a very specific thing across core - say the CSS files cleanup), then we should try to co-ordinate those as well, but we'll have to see how that goes. Actually saying things will get committed on a particular day could easily fall down just on timezones. I'd rather increase throughput in the RTBC queue, at least for patches that are already at this point, and try to give people a 2-3 day period, announced maybe a week in advance, where "if it's ready it'll get committed and we'll avoid committing stuff that obviously conflicts with it". I don't think those need to be announced though if they're not breaking lots of other people's patches.

Crell’s picture

+1 on a narrow window for mega-patches rather than specific days, and +1 on increasing RTBC throughput. :-)

Also, for some of these epic patches couldn't they be broken up or done piecemeal over time? Eg, removing the cache BC layer can be done after everything else is converted over to the new API model, and that can be done bit by bit. That's a lot of potential Novice patches for newbies, or just boring stuff for when you need a break from hard patches. Ibid for changing how we document hooks.

That would let us potentially dovetail a lot more patches, as long as we can keep the throughput up, and give more on-ramp opportunities for new contributors.

webchick’s picture

Eh, if it's exactly the same change in 10,000 files, I'd prefer a mega-patch to 10,000 patches, personally. Each patch has to be reviewed, then has to be RTBCed, then has to be committed. Even if these commits are quick, it vastly increases the size of the RTBC queue, ensuring more important patches are missed, and the overhead of doing that in 10,000 issues wastes a lot of valuable time.

webchick’s picture

An example of where a mega-patch might not be preferred, even though it's the same type of change in all files, is something like #711918: Documentation standard for @param and @return data types. Because each of those individual changes needs to be carefully reviewed as to whether it's correct or not. So in that case, I'd do it on something like one patch per module (and then maybe another one for includes/*).

However, things like these:
* #1295456: Add missing '$' character in docblock for DrupalWebTestCase::drupalGetNodeByTitle() function.
* #1295460: Add missing '$' character to docblock for taxonomy_get_term_by_name() function.
* #1295462: Add missing '$' character to docblock for _theme_load_registry() function.

... are really easy to glance at quickly to make sure the change is consistent, so there's no reason to split them apart into multiple patches.

catch’s picture

Yeah I agree with webchick. Implements vs. @implements (if we do it at all, that one's somewhat controversial) would likely be completely scripted for rolling the patch - that's a good reason to just announce it for a specific day and get it over and done with. Same for smaller changes where the resulting patch is still going to be very small and reviewable.

On the cache bc layer I'd actually planned to find new contributors to do module by module + /includes/* but then a new contributor did a massive sprint on all of them in the one issue. The resulting patch was about 26k for everything so that was great.

But in general:

- add new API with bc layer
- convert rest of core module by module + /includes (roughly) in novice patches that are individually reviewed and committed.
- remove BC layer

is a really good pattern if we can do it.

This brings up one question, which is what happens to existing patches in the queue using the old API? Technically those are only 'invalid' once the BC layer is removed, but with dbtng often we marked things CNW if they used the old API and treated that as part of the overall conversion effort. I think unless a patch is actually regressing a conversion we should not worry about that too much. So maybe we can completely avoid scheduling if we do it this way.

pillarsdotnet’s picture

FWIW, I actually *did* find and fix those separately, although in quick succession. I suppose when I found the second one I could have appended to the first instead of opening a new issue, but the new issue seemed more likely to get reviewed/accepted, and saved me the difficulty of having to compose a title that adequately described the expanded scope. Note that the following would not fit into a single title line:
Add missing '$' character in docblock for DrupalWebTestCase::drupalGetNodeByTitle(), taxonomy_get_term_by_name(), and _theme_load_registry() functions.
And the last time I submitted an issue that included "... in various functions" you shot it down as having too vague a scope statement.
EDIT: retracted the last; can't find the issue right now; it probably wasn't webchick and may not even have been Drupal core.

webchick’s picture

pillarsdotnet: You seem unnecessarily stuck on discerning a Single And True Golden Rule Of Unbreakableness that can apply to any and all situations. I just don't think that's possible. The entire crux of this discussion is that big changes are far more nuanced than that, and some classes of changes call for different classes of strategies.

I have absolutely no idea what the context was of me "shooting you down" in an issue with a title including the words "in various functions." That doesn't sound like me, but hey, maybe I was having a bad day. But in this case, I would've just used the title "Add missing '$' character in docblock variables." or similar, and rolled a single patch that made the correction across core.

pillarsdotnet’s picture

a Single And True Golden Rule Of Unbreakableness

Ah, the Holy Grail of computer programming...

webchick’s picture

Maybe a guideline though is "If it can be scripted, do it in a single patch. If it can't, it must need more careful review, so split it up by module/include file."

Seems like it should be possible to search for and identify all instances of "@param something_without_a_$_before_it\n" in core. Just as it should be possible to script a mass-change of Implements to @implements. Breaking those types of changes up into smaller patches simply adds more work to everyone's plates, and wastes valuable energy saying "Yes this is also a find/replace. RTBC." multiple times.

Adding datatypes to params/returns, OTOH, requires a human to actually go through and check the values and document them, and then another human to verify that they are correct, and also check for typos and the like. Multiple, per-module/file patches make a lot of sense for that. (The patches I've seen lately have been per-function though, which is way, way too granular IMO.)

In terms of patches that introduce new APIs, I'd still prefer to stick to "once the API's in, all new code must conform to it regardless of the presence of the BC layer" mostly because then we eventually whittle the places the BC layer is used down to zero. It's easy to envision the DBTNG conversion taking at least 1/3 longer if we hadn't stuck to that rule. However, I don't feel super adamant about it if you think allowing old uses of the APIs is super valuable.

pillarsdotnet’s picture

Seems like it should be possible to search for and identify all instances of "@param something_without_a_$_before_it\n" in core.

Exactly what I was doing -- only I was looking for type-hinting, not missing dollar-signs. Only found the latter when I did a "sort -u" on the results and said, "Wait a minute -- that's not a legal datatype..."

Found 'em one at a time 'cuz I was sure each time that it *must* be the only one.
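That kind of search is straightforward to script. The pattern and sample file below are illustrative assumptions, not the exact command anyone used: they flag `@param type name` lines where the parameter name is missing its leading `$`.

```shell
# Build a small sample docblock file to search (illustrative only).
printf '%s\n' \
  ' * @param array $form' \
  ' * @param array form' \
  ' * @param string $name' > docblock-sample.txt

# Match "@param <type> <word>" where the last word lacks a leading '$'.
grep -nE '@param +[[:alnum:]_]+ +[[:alnum:]_]+ *$' docblock-sample.txt
# Matches only the second line, the one missing the '$'.
```

In a real checkout the same pattern would be run recursively, e.g. with `grep -rnE ... --include='*.inc' --include='*.module' .`, and the results deduplicated with `sort -u` as described above.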

(The patches I've seen lately have been per-function though, which is way, way too granular IMO.)

See #1295500-7: Add missing '$' character to docblock for install_profile_info() function.

Can you PLEASE limit your changes to the doc of the function mentioned in the issue statement? Cleaning up a whole file like this, AS YOU WELL KNOW pillarsdotnet, is not OK at this time.

Things like that are why I seek a Single And True Golden Rule Of Unbreakableness.

Y'all argue amongst yourselves; lemme know the conclusion. Preferably in writing. On a docs page.

Crell’s picture

I agree with the heuristics proposed in both #27 and #31. I'd probably be a little more lenient than webchick on existing patches. If a patch is already in the queue when the API change happens, and it's not doing anything to that particular API, then I'd be inclined to not force a reroll just for that. If it's changing that logic, though, it should use the new API.

Eg, a patch that was already in the queue and moved an insert statement isn't really touching the query, just wrapping it in an if statement or something, so no sense beating up on that poor patch writer for it. If the patch is changing the query though, then it may as well get rewritten to the new API while we're at it since we're messing with that query anyway.

A bit of a gray area, I admit, but I don't know that we should force a patch to also be an API update just because it happens to be close to a routine that changed, only if it's actually changing the code in question.

catch’s picture

Yeah the situation in #33 is what I was thinking of. That way we don't invalidate every patch in the queue as soon as the new API lands, but we can still stop patches that would make more work for the conversion - so more or less grandfathering existing patches until the bc layer is removed.

webchick’s picture

Yeah, that seems reasonable. So if the +/- in the patch is identical, just in a different location, you're fine. But if the +/- line changes, you're on the hook for implementing the new API.


catch’s picture

Yeah that works for me. This would mean we can do the API change + bc layer / conversion / bc layer removal workflow without any scheduling at all, which would be ideal. The removal is going to break every single contrib module that hasn't converted yet, but that's what unstable releases are for.

Crell’s picture

Sounds like we're all on the same page.

jhodgdon’s picture

Regarding the "single golden rule" for when patches are too big/too small, I wrote this up and filed a new issue so it could get put into the doc:
#1298194: Standard needed for what is a "disruptive" patch

Please comment there and I'll write up the page sometime soon (but not today, too much going on...).

webchick’s picture

Could we not just have that discussion here, since it's quite related, and we already have a few people participating?

jhodgdon’s picture

Sure. I'd like to keep the other issue there though, since I need to remember to make the page. :) Added a comment there to discuss here.

And so as to avoid extra clicking, here's the text of the comments that I'm recommending as a starting point to the "golden rule" page discussion:

a) Patches that do small standards updates to a whole file or to most of a file would tend to make it so that other patches in the queue cannot be applied, if they are not committed. So they can only be committed when the committers are receptive to such patches, such as the first week in November when every D8 patch in the queue is going to break anyway because the core files are being moved into a 'core' directory.

b) Patches that fix the same real issue in about 10 functions are not as disruptive as, and are easier to review and commit than, 10 smaller patches fixing the same real issue in the 10 functions separately.

c) When you are fixing up a docblock for one function, any patches around that area in the file would already be disrupted. So if there is one doc problem in a function's doc, it's not more disruptive to also fix up the rest of the doc problems in that one function at the same time, and it doesn't make the patch much more time-consuming to review or commit. So I advocate bringing each function into compliance with our current doc standards whenever a doc problem is being fixed for that function.

pillarsdotnet’s picture

Also somewhat related: #1298304: [Meta] Move all externally-developed files to core/vendor/{vendorname}/ directories
Such files should also be excluded from automatically-generated clean-up patches.

webchick’s picture

On #40.a), the only time I can see stalling/scheduling a patch is under the following conditions:

a) It's an automated change that can be scripted, which means it will impact the entirety of core files (since we don't want automated patches split up file-per-file since that's annoying to review). Also means it's easy to re-roll. (Or at least easier than patches that can't be scripted.)


b) The ratio of things changed in the patch is sufficient to break a significant number of patches in the queue. We could try and codify this somehow (> 100K patch? > 50 files touched?), but basically #22336: Move all core Drupal files under a /core folder to improve usability and upgrades and #1109202: [meta] Fix coding standards violations across Drupal core would qualify, "Fix variables without $ in front of them" would not.

If it is not automatable, that means we need to split it up per-module anyway, because it requires human creation/review. And those are far less likely to break everything in the world so could just be committed as they're ready (assuming there aren't more important fixes vying for core maintainer attention in the RTBC queue and we're below issue thresholds).

jhodgdon’s picture

RE #42 - In the past, you haven't been receptive to other far-reaching docs patches, such as a human-generated per-module effort to bring doc headers up to standards (e.g., first line an 80-character full sentence with the right verb, followed by a blank line, and a space between param and return). At least during the D7 cycle, those issues were postponed by you (webchick) as being too disruptive, and I thought I was being told that the period after the "move everything to core directory" patch was a good time to do those kinds of human-done wide-impact patches?

i.e. (b) is satisfied (lots would break) but not (a) (automated). So we split it up into modules, but it wasn't acceptable to do then anyway.

So maybe this does need some further clarification? Otherwise, I'll schedule the "fix up doc headers" sprint for some other time that is more convenient for me...

webchick’s picture

I remember having a discussion about that at DrupalCon, but I don't remember that being any kind of formal mandate. That was also when the RTBC queue was about 100 issues higher than it is right now, and we were still fire-fighting critical D7 bugs.

I think the key there is "more important fixes vying for core maintainer attention" which is hard to codify.

jhodgdon’s picture

Ah, good point. That was late-ish in the D7 cycle and things were insane.

So how about this as an outline:

a) Patches that do doc/coding standards updates to a whole file or to most of a file would tend to make it so that other patches in the queue cannot be applied, if they are not committed. So they can only be committed when the committers are receptive to such patches.

The committers are less likely to be receptive to disruptive patches when:
- It is late in the development cycle for a particular Drupal release.
- There are a lot of RTBC patches in the queue that would need to be rerolled.
- A large and complex patch is close to RTBC and would need to be rerolled.

They are more likely to be receptive to disruptive patches when:
- It is earlier in the development cycle for a particular Drupal release.
- The changes can be automated by a script (in which case, the patch can be generated for all of Drupal in one shot, and regenerated at any time by the script and committed when convenient).
- If they can't be automated, the changes are broken up by module, so they can be reviewed by patch reviewers and committers in smaller chunks.
- The changes are submitted just after another disruptive patch that will require rerolling of a lot of patches anyway.

b) Patches that fix the same small issue in about 10 functions are not as disruptive as, and are easier to review and commit than, 10 smaller patches fixing the same issue in the 10 functions separately. [EDIT: added the word "small" here]

c) When you are fixing up a docblock for one function, any patches around that area in the file would already be disrupted. So if there is one doc problem in a function doc, it doesn't add any additional disruption (or much review time/difficulty) if the patch also, at the same time, fixes up the rest of the doc problems in that one function. So, the standard practice is that when you are fixing a doc problem in one docblock, you should also bring the docblock up to the current doc standards in the same patch (e.g., make sure the first line is a sentence with the right verb in 80 characters, there is a blank line between param/return, lists are formatted correctly, the grammar/spelling is correct, params are all there, and the doc is generally clear).

Lars Toomre’s picture

It helps me to understand better with an example. For instance, it appears that one of the things going into D8 core is type hinting for @param and @return doc strings. Under the proposals above, how would one like to manage this type of docblock change for files like or system.module?

All-at-once changes to such files will require significant manual work to create and review over some period. Meanwhile, other patches will be both prepared and committed that touch these files. Should this type of patch be applied all at once for these files? Or say 10-15 docblock changes at a time?

I ask this because I am a big fan of type hinting and realize it will require significant work. However, I also believe it will help the Drupal community much going forward.

webchick’s picture

As outlined above, I feel these should be done one patch per-module, one patch for /themes, and one patch for /includes.


Each one of these issues needs to go through phases of development, then review, then revision, then review again, then promotion to a core maintainer, then possibly more revision and more review again, and then finally commit. (As long as issue thresholds are met, and as long as there aren't more pressing things in the RTBC queue which would be invalidated by these changes.)

Clearly, doing this in one patch for all of core is way, way too much (unless it can be scripted, which this one can't). It would break too often and fall out of date too quickly, unless we just took an entire week off core development and said that's the only thing we're getting done.

One patch per-function, likewise, is obviously way, way too granular. Drupal 8 has 4,000+ functions which is 4,000 issues that need to go through that workflow, and clutter up the RTBC queue from more pressing fixes.

One patch per-file might seem like a good breakdown, but there are still more than 400 files, which is more than 400 issues that need to be separately reviewed and committed. And worse, they do not break down along logical lines. If I know the Shortcut module really well, I need to track 4-5 separate issues to keep on top of it.

If we kept it to one patch per-module, that'd break it up into 40 logical chunks so that people who know, say, Shortcut module could review all of the changes to it at once instead of over 4-5 issues. Plus another for themes (+1) plus another for /includes (+1) means fewer than 50 issues to track for such an initiative, which seems a lot more doable to me.

Lars Toomre’s picture

I asked the above question because I thought it would be helpful to start with a smaller 'test' issue to gain experience with. Since many other patches seem to touch and/or system.module, I thought we might start with just those files and see what the experience and bike-shedding is like in moving to a final commit. Does this make sense?

webchick’s picture

Yes. So I would start with something like Poll or Shortcut module, which is both user-facing and much easier to wrap your head around, and also unlikely to be conflicting with other patches.

pillarsdotnet’s picture

Note that #711918 is still undecided. We lack sufficient buy-in from core developers, according to this comment.

jhodgdon’s picture

So just to clarify ... I had been planning to set up an API docs sprint in early November (after the "move to core" patch lands) to clean up some docs standards core-wide. Are you saying we don't need to wait for then to do this work, but could do it now (and potentially for d7/8 both)?

This is not the "add type hinting to docs" cleanup, but a less ambitious (and less difficult to review) sprint to get first line descriptions up to standards (verbs, 80 characters), and blank lines between params and returns, and possibly other cleanup stuff (indentation in params and stuff like that that may be lingering).

On the "one per module, one for themes, one for includes" idea... For modules and themes, that is probably fine. For includes... We have about 58,000 lines of code in about 90 include files. So the "all of includes" issue and patch would be huge. But I guess that's only a factor of two or so bigger than modules/system (22K lines, 32 files). Themes have a lot of files, but not that many lines of code... So, OK. Or we could perhaps decide to do "includes A-L", "includes M-Z", "includes subdirectories database and filetransfer", and break it up into three slightly more manageable chunks?

jhodgdon’s picture

Issue summary: View changes

Updated "Remaining Tasks" per IRC discussion.

Crell’s picture

Did any of those remaining tasks happen? Are any no longer relevant?