Proposal to change the project application review and promotion process

All Projects

All projects, new and existing, will be scanned regularly by Drupal.org code review tools. The results of these scans will be made available to project maintainers and will be used to display pertinent information on project pages. The information may include security warnings, error counts, or other information related to project health. This scan will also be used on all projects before they are promoted to full status.

Project pages will contain text explaining the Security Team’s responsibility for development releases. This text will be part of the downloads section. Drupal’s Update Status report will also display a warning when running the development version of a project.

Project Promotion

All users, regardless of their Git-vetted status, will follow this process:

  1. Create a sandbox project.
  2. Initiate promotion of the project, which will mark the project as ready for an automated scan. Only the owner of the sandbox can initiate the promotion process.
  3. If the project passes the automated scan, the project will be given full status.
  4. Choose a namespace for the project.

Non-vetted Users

For non-vetted users, the following restrictions apply:

  • Only one sandbox project can be promoted to full status.
  • The promoted project will have development snapshots, but cannot have tagged releases (beta, rc, 1.x, etc.).
  • The project page will have a visible indicator that the project is maintained by a non-vetted user, including text that the project is not subject to Security Advisories.

Requesting Vetted Status

When a user completes the project application review process, the restrictions above are removed. The process is as follows:

  1. Submit a project application request in the Project Application queue. The project submitted for review should have already been promoted to full status, and passed Drupal.org’s automated scan.
  2. Reviewers will manually review the project, checking only for problems related to licensing, security, or major problems with Drupal API use. Applications will not be blocked for minor coding standards violations, bugs, or project duplication.

Existing Vetted Users

  • Users that already have vetted status will not be affected at this stage, other than gaining access to automated review of their code before making it available for release.

BACKGROUND

As many people in the Drupal community know, the current project applications process is not ideal, and can be a huge barrier for new contributors. The rate of incoming applications exceeds what our limited number of volunteers can review and approve, which results in people waiting months to create full projects on Drupal.org.

Recent review automation and bonus programs created by klausi were a big improvement; however, they weren’t able to fix the problem completely.

For the past few months, a number of people in the community have been discussing possible ways to improve the situation. Conversations have occurred between the Technical Working Group, Drupal.org Software Working Group, Security Working Group, and frequent project application reviewers.

As a result of those efforts, we’d like to introduce a proposal that was approved by all of the above Working Groups.

First of all, what are the goals of the changes we’d like to introduce?

  • Reduce the feeling that there are two tiers of users: those who can contribute code freely, and those who cannot.
  • Prevent insecure and poorly coded modules from being published on Drupal.org. This is a major concern for the Security Working Group, which is responsible for Security Advisories for contributed projects, as well as core.
  • Let sandbox users get a release that is installable. This is one of the biggest desires of users waiting for approval. They want a release for their module that they can link to and share.
  • Let sandboxes reserve a namespace so that every project has a proper project page and url.
  • Reduce the amount of work a review requires by reducing the rigor with which reviews must be completed.

We recognize that this proposal won’t make the process perfect, but it is a step in the right direction. After these changes are implemented, we will continue exploring options to make the process even better.

Comments

dawehner’s picture

Only one sandbox project can be promoted to full status.

I'm not sure what we gain from that. It just makes it more complex to contribute, and would lead to annoyed people.

davidhernandez’s picture

@dawehner, that is only for non-vetted users. Everyone can still create as many sandboxes as they want. Vetted users can create as many full projects as they want. Non-vetted users will be able to create one full project. Currently they can only create sandboxes. The idea being that new users can at least jump right in while they get through the approval process.

mpdonadio’s picture

For those reading, I am one of the current Review Admins. Overall, I think this is a good step forward.

A few quick comments.

If the project passes the automated scan, the project will be given full status.

How do false positives get dealt with? One common issue we deal with is that users will see "errors" with the current tools, and just adjust code until it passes. The biggest problem with this is the 80-char limit, and when users try to "fix" this they usually end up with translatable strings broken up across multiple lines.
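
The broken-string problem mpdonadio describes can be sketched as follows. This is a hypothetical illustration, with a stub t() standing in for Drupal's translation function; in Drupal, each distinct string passed to t() becomes a separate translatable string, which is why fragments are harmful:

```php
<?php
// Stub standing in for Drupal's t(); the real function also registers the
// string for translation, so fragments become separate translatable strings.
function t($string) {
  return $string;
}

// A "fix" for the 80-character warning that splits a translatable string:
// translators now see two meaningless fragments instead of one sentence.
$broken = t('This long help text explains what the module ') .
  t('does and why you might want to enable it.');

// Correct: keep the string whole, even past 80 characters. Drupal's coding
// standards allow long lines for translatable strings for exactly this reason.
$whole = t('This long help text explains what the module does and why you might want to enable it.');
```

The rendered output of both versions is identical, which is why the mistake is easy to make; the damage only shows up for translators.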

Reviewers will manually review the project, checking only for problems related to licensing, security, or major problems with Drupal API use. Applications will not be blocked for minor coding standards violations, bugs, or project duplication.

This has really been the official policy the whole time, and it is what the admins follow. The problem has been with peer review, where non-admins will push projects back to Needs Work over silly things.

I also want to mention that I would estimate that 75% of projects I look at for the first time (or that haven't been looked at by an experienced user) have security problems. I think that if an admin finds a security problem with a project from a non-vetted user, it should be possible to demote that module until the problem is fixed.

cilefen’s picture

Can an automated scan really find security problems?

klausi’s picture

To summarize the changes for reviewers and git admins: We should not block applications based on module duplication, so we abandon the "collaboration over competition" rule. The rest stays the same.

I personally will still mention and recommend that applicants work with the existing projects instead of forking them, but I will not block them based on that anymore.

DamienMcKenna’s picture

Two quick things:

1. If non-vetted users can't tag releases for their modules, why limit the user's ability to create them?

2. Wouldn't advertising the fact that the module is maintained by a non-vetted user just add to the perception of there being two tiers? The simple fact that there wouldn't be a tagged release should be sufficient for many to avoid the module.

gisle’s picture

The promoted project will have development snapshots, but cannot have tagged releases.

Currently, only tagged releases can have translations on https://localize.drupal.org/ . For non-English users, this makes snapshot-only projects very much second class.

I am not really familiar with the translation process, but AFAIK this limitation exists for a technical reason and may not be liftable without some re-engineering of https://localize.drupal.org/.

dman’s picture

I like the addition of at least dev tarballs for sandbox projects. A few times I (as a vetted user) have had to promote my own temporary sandbox projects to full - mainly to make them easily downloadable for others - though I didn't *really* want them to be any more than sandbox. node_summary_token is one like that.

The rest of it sounds fine. The addition of built-in automated code scans on the uploaded code bases will be handy for everyone, I think!

DamienMcKenna’s picture

Could we also force-on patch testing? Right now it's optional yet so many projects would benefit from it.

davidhernandez’s picture

Regarding questions about the code scan, some of the implementation details will need to be worked out. We will need to do rounds of testing, and figure out what the most appropriate starting point is. I can't claim to know exactly what the final result will look like.

@cilefen, yes, the scan can find security problems. It won't find less obvious vulnerabilities, but that is why we still need to have a manual review. Hopefully, in time, the scan will improve and we can rely less on the human component.

@DamienMcKenna, 1. There are concerns beyond tagged releases, for example namespace squatting. We may loosen the restrictions more in the future, but we feel more comfortable loosening up slowly. If we take too big a step, we may not be able to roll back.

2. The Security Team is very concerned about making it as clear as possible that the project is not subject to security advisories. "The simple fact that there wouldn't be a tagged release should be sufficient for many to avoid the module." Unfortunately, many of us have learned that this is not the case. People will use any version they find available.

Forced patch testing is something we can consider. Thanks for the suggestion.

@gisle, we'll have to consider the effect on localize.drupal.org, which will have to be looked at more carefully during implementation. Thank you for bringing it up.

David_Rothstein’s picture

Looks like a really good direction overall.

The project page will have a visible indicator that the project is maintained by a non-vetted user, including text that the project is not subject to Security Advisories.

Any warning stating that the project is not subject to security advisories should be placed on all projects that are not subject to security advisories, not just ones by non-vetted users. Otherwise we risk implying that all projects by vetted users are covered by the Security Team, when they actually all aren't. (See also #786702: Add information about releases covered by security advisories to project pages near download table, which is closed at the moment but not for a particular reason; I'll reopen it.)

This would also address DamienMcKenna's concern in #7; there would be no need to treat the non-vetted projects specially or have any visible indicator labeling them as such.

Project pages will contain text explaining the Security Team’s responsibility for development releases. This text will be part of the downloads section. Drupal’s Update Status report will also display a warning when running the development version of a project.

Similar to the above, it's important to make sure this mentions all releases that the Security Team doesn't cover (not just development releases). And not all development releases are created equal, either. Even though all of them are technically unsupported by the security team, a development release that is newer than the module's newest stable release is practically-speaking almost supported (since any security issues it has are very likely in the last stable release too and therefore covered), and certainly better than using alpha/beta/RC releases in that regard.

David_Rothstein’s picture

I like the addition of at least dev tarballs for sandbox projects. A few times I (as a vetted user) have had to promote my own temporary sandbox projects to full - mainly to make them easily downloadable for others - though I didn't *really* want them to be any more than sandbox.

Maybe I read it wrong, but I didn't see anything in the current proposal that would change this. I think you would still need to promote the sandbox to a full project in order for a dev tarball to be created.

dman’s picture

@David_Rothstein Yeah maybe I skimmed it too lightly.
I saw

sandbox ... promoted project will have development snapshots

without first noticing that there would be *only one* nominated sandbox eligible for this at a time (and that this would apply only during vetting anyway). Hm.

Fabianx’s picture

Any warning stating that the project is not subject to security advisories should be placed on all projects that are not subject to security advisories, not just ones by non-vetted users. Otherwise we risk implying that all projects by vetted users are covered by the Security Team, when they actually all aren't.

Yes! Those have been exactly my thoughts on this. +1

It can be very confusing when modules with >60k installs have only a beta5 release and are hence _not_ stable.

This should also discourage such project maintainer behavior.

And a prominent warning will definitely help.

cweagans’s picture

Personally, I think we should get rid of the non-vetted "tier" of users entirely. Everyone can create a sandbox. When they want to promote it, they have to pass the automated scan. When that happens, it can be promoted. Just open that up to everyone.

Basically, I propose removing the last two sections of the issue summary and calling it a day.

I'd also propose adding a "Help me improve my module" button somewhere, that basically opens an issue asking for a manual review of the module by a project review team. It'd be cool to shift from "I'm opening an issue so that I can maybe become a contributor someday" (current process) to "I'm opening an issue so that I can be a better contributor" (proposed process).

klausi’s picture

We detected 400 security issues in the project application process over the last 4 years, and those are only the confirmed and tagged security issues.

So if we let anyone promote projects, we should also drop security support for contributed modules. That would avoid quite a lot of work for the security team :-) Or we could drop security support for modules with fewer than 100 installs, or similar.

And we should automatically delete full projects that don't have anything committed to git after 3 days, so that unused namespace is freed up again and we avoid the namespace squatters.

colan’s picture

To summarize the changes for reviewers and git admins: We should not block applications based on module duplication, so we abandon the "collaboration over competition" rule. The rest stays the same.

The reason here is to speed up the approval process? It looks like we're solving one problem by making another worse, but maybe I'm missing some of the background.

If we could come up with a way to reduce the duplication, it would be fantastic. I'm quickly losing interest in writing module comparison pages, such as Comparison of Salesforce-Webform integration modules. That level of duplication is insane.

cweagans’s picture

@klausi, How difficult would it be to include security tests in the automated scanner? I acknowledge that that's an issue that needs to be solved before moving in the direction of opening project promotion up to anyone. I think it's worthwhile to consider, at least.

srjosh’s picture

One of the main issues I've had with the process is the "review bonus" process. It's logically flawed - if the Drupal community (or the people in charge of safeguarding us from flawed code, God forbid) doesn't trust someone enough to allow them to promote a sandbox to full status, why on earth are they trusting their reviews of other people's module applications? It makes no sense at all.

What is being done to address this?

gisle’s picture

srjosh wrote:

One of the main issues I've had with the process is the "review bonus" process. It's logically flawed - if the Drupal community (or the people in charge of safeguarding us from flawed code, God forbid) doesn't trust someone enough to allow them to promote a sandbox to full status, why on earth are they trusting their reviews of other people's module applications? It makes no sense at all.

That's not quite how the process works.

I suspect that the main reason behind the "review bonus" is also to make unskilled contributors conduct reviews. Doing a code walkthrough and writing up a review of somebody else's project is for many a greater educational experience than actually getting one's own code reviewed. (At least, that is how it worked for me.)

As for the promotion part of the process, that doesn't happen based upon peer review alone. When a project has been scrutinized by multiple eyeballs (some quite skilled, and some less so) and has reached RTBC status, it is assigned to a git admin. The git admins are senior reviewers and have a good eye for security problems. More often than not, they find major problems and push the work back to "Needs review".

I think the current process is quite sound and provides an excellent learning experience for those who participate. Its only problem is that there are too few reviewers available (both junior and senior) to keep up with demand, making the review process an unhappy experience for many.

I therefore understand the need to look at this with fresh eyes to cut down on the long waiting times and the huge backlog that we currently experience. But there is nothing wrong in principle with how the current process works and how it allows both junior and senior reviewers to participate.

mpdonadio’s picture

I can talk about #19 a little; hopefully @klausi can provide updates. The current scanner[1] (phpcs + DrupalSecure) can detect some things, like using $form_state['input'] instead of $form_state['values'] and not using placeholders in queries. It also has sniffs for XSS and CSRF, but I don't think I have ever seen it catch anything.

I do know it missed a CSRF in an application which could result in a DoS and a hard server crash. This particular application hasn't been responded to in nearly a week.

[1] The online version lags a bit behind the "bleeding edge" version that some of the admins use from the command line.
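
As a hypothetical illustration of the query-placeholder pattern mentioned above (db_query_stub() below is my own stand-in, not Drupal's database layer, and addslashes() is used only to make the escaping visible):

```php
<?php
// Stand-in for a parameterized query API: the database layer substitutes
// placeholders with escaped values instead of trusting raw concatenation.
function db_query_stub($sql, array $args = array()) {
  foreach ($args as $placeholder => $value) {
    // A real driver escapes per the database's rules; addslashes() is only
    // used here to make the difference visible.
    $sql = str_replace($placeholder, "'" . addslashes($value) . "'", $sql);
  }
  return $sql;
}

$title = "x'; DROP TABLE node; --";

// Pattern the sniff flags: user input concatenated straight into SQL.
$unsafe = "SELECT nid FROM {node} WHERE title = '" . $title . "'";

// Preferred pattern: a named placeholder, resolved by the database layer.
$safe = db_query_stub('SELECT nid FROM {node} WHERE title = :title',
  array(':title' => $title));
```

In the $unsafe version the injected quote ends the string literal and the rest of the input runs as SQL; in the $safe version it arrives escaped.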

Fabianx’s picture

#16 I mean yes, the main problem with stable releases is that the module needs to be security team supported.

There are two things that might be problematic in opening the flood gates:

a) A flood of modules that make sense and are used, but whose maintainers never read the guidelines for good code. If you need to go through the process once, you learn a lot at least. And having only one module published and no stable releases is still a good incentive to go through the process.

b) Security team vetting

For b) it might make sense to offer 'security team supported' as a kind of badge, but then the approval process would apply to all new stable releases / modules instead, and I'm not sure that is desirable either ...

jthorson’s picture

@cweagans: I think that your suggestion would be a fantastic 'end goal' ... in this case, the proposal as presented is a case of 'walk before we run'. One of the considerations that went into this discussion is a desire to ensure that we don't "over-correct" with the changes that we're suggesting.

gabesullice’s picture

I'm very happy to see a proposal acknowledging this issue and making some worthy suggestions to address it. For far too many people, I think the process is frustrating and off-putting. However, the process does provide some very valuable insight to new contributors and keeps many duplicate and ill-conceived modules from appearing on Drupal.org. I think we need to tread lightly and not throw the baby out with the bath water, but this proposal and discussion are well worth pursuing.

Something I would like to see included is a badge/anti-badge system for contributed modules. For example, every full project might begin with an 'anti-badge', e.g., "This module has not passed a security review." Maintainers would then be encouraged to submit their module to a security queue to be reviewed, and/or passersby could perform an audit and submit it for review themselves. One can imagine a number of these. After some time, perhaps modules like Views or Context might receive certain badges which signify that they're somehow "battle tested", as in they've been in use by 5000+ sites with ongoing development and security reviews. Finally, as for automated reviews, there could be a badge that rates modules red, yellow, or green based on some codesniffer result. That would provide some leeway for false positives and would not be overly strict. Needless to say, there's lots of experimentation that we could do there.

Overall, the anti-badge idea is very appealing to me because it actively encourages maintainers to pursue secure and quality codebases while not giving an explicit endorsement of security to modules with a "Security Reviewed" tag of some kind.

The last thing that I would like to see, if we open the doors, is a means to mark modules as duplicates and/or provide alternatives that is not solely editable by the maintainer. Perhaps something like a 'Mark this module as a duplicate' link could be added, and after a manual review or some level of corroboration the module would be given a demerit.

cweagans’s picture

Fair enough. Personally, I'd like to see it put in the plan as a long term goal, but I'm happy with the plan in the summary as an initial step.

colan’s picture

I really like where #25 is going. So, to summarize, we could have (anti-)badges like the following, which users other than the maintainers could edit (although maybe the last one could be automatic):

  • (not) security reviewed
  • (not) marked as a duplicate [of X]
  • is (not) used by at least Y sites

Matt V.’s picture

Something that I think could be clarified within the "text explaining the Security Team’s responsibility for development releases" is that the security team does not explicitly vet or review stable releases. The way I think of the security team is more that they shepherd the process of getting bugs fixed, once they are discovered. Some people have the misconception that all stable releases have gone through a security review.

I'm against the idea of a "badge" system for marking projects as having had security reviews. The fact is that even projects that get reviewed one day can have new issues introduced and/or discovered the next day. Automated scanning can help weed out bugs and make projects "more secure" but they won't be able to make them "secure." In the words of Bruce Schneier, "Security is a process, not a product."

attiks’s picture

All users, regardless of their Git-vetted status, will follow this process:

Initiate promotion of the project, which will mark the project as ready for an automated scan. Only the owner of the sandbox can initiate the promotion process.
If the project passes the automated scan, the project will be given full status.

I like the whole idea, but I have some concerns about the automated testing. As a test I ran it on the picture module and I got a lot of false positives:

  • There's a submodule named flexslider_picture, which gets flagged as badly named; at least that's debatable
  • Advice "Do not call theme functions directly, use theme('flexslider_list', ...) instead" would break a site
  • Minified JavaScript files are checked: "ASCII text, with very long lines, with no line terminators"
  • Unknown type hint "PictureMapping" found for $picture_mapping
  • Third party files are checked as well
  • Feature files are not excluded (not related to this module)
kreynen’s picture

Views doesn't even return a clean "pass" from the current tests. http://pareview.sh/pareview/httpgitdrupalorgprojectviewsgit returns issues with function naming, but that's nothing compared to http://pareview.sh/pareview/httpgitdrupalorgprojectlibrariesgit, which returns more than 100 "errors".

I'm not really sure how this is going to work if the most popular Drupal modules are going to be flagged by this process.

kreynen’s picture

I suspect that the main reason behind the "review bonus" is to also make unskilled contributors conduct reviews.

A by-product of the "bonus reviews" has been that "unskilled contributors" can be part of a very frustrating initial experience for someone with programming experience and experience contributing to other projects who is interested in contributing to Drupal. These developers don't understand why these "unskilled contributors" looking for a bonus are providing feedback. By the time someone more skilled even looks at the issue, it can be too late. The developer has moved on thinking, "if that's the type of person responsible for code on Drupal.org, is this really something I want to be involved in?"

Clearly labelling the reviews by non-vetted, new contributors looking for a bonus would help, but it looks like they are being removed altogether.

Is that correct?

Despite the false positives, I generally like where this is headed. Anything that removes barriers to contributing and increases transparency around where a project is in the process is progress. I really like the process StackExchange uses to track the steps required to start a new exchange, like http://area51.stackexchange.com/proposals/61762/gamification

Under the QA site proposal's "score" you can find the details about what led to that score...

The commitment score is the minimum of three scores:

  • 84% (169/200 committers in total)
  • 100% (119/100 committers with 200+ rep on any other site)
  • 100% (commitment score, based on committers' activity on all other sites and how old the commitment is)

klausi’s picture

@attiks:

  • Submodules should be prefixed with the main project name, so the automated review is correct. It should be "picture_flexslider", not "flexslider_picture", to avoid name clashes with other projects.
  • Yes, minified files are flagged as false positives, and we can probably remove them from pareview.sh now since PHPCS should detect all newline/character encoding errors.
  • Third party files are forbidden on drupal.org in general, so they should be removed; then you don't get errors ;-)
  • The Features module should be improved to output coding-standard compliant PHP files.

@kreynen: there are no plans to remove the review bonus system, since I expect that the project application queue will still be long and we want to prioritize it.

@cweagans: automated security tests are hard, and we can only check some trivial things. The problem is that if we have too many false positives, people change their code without thinking just to make the reported "errors" go away, which leads to nonsense like

$message = check_plain('Hello');
drupal_set_message($message);

Just because the old Coder Review flagged $message as a "potential security problem" here. That's also the reason why we no longer run the old Coder Review in pareview.sh.
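
The no-op "fix" above can be reproduced with a minimal stand-in for check_plain() (Drupal 7's check_plain() wraps the same htmlspecialchars() call):

```php
<?php
// Stand-in matching what Drupal 7's check_plain() does: HTML-escape a string.
function check_plain($text) {
  return htmlspecialchars($text, ENT_QUOTES, 'UTF-8');
}

// Pointless: 'Hello' is a trusted literal with nothing to escape, so the
// old Coder Review warning prompted a no-op "fix".
$message = check_plain('Hello');

// Escaping earns its keep only on untrusted input headed for HTML output.
$user_input = '<script>alert(1)</script>';
$safe = check_plain($user_input);
```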

attiks’s picture

#32 Thanks for the feedback and for the tool

  • picture_flexslider: the idea was to merge it with the flexslider module, but that never happened; I'm not going to rename it now
  • Third party files aren't forbidden AFAIK, but we added them because we needed to patch the library and create the minified version; and yes, in an ideal world this would be fixed upstream first.
  • Features would be fun to do; keeping in mind that a feature could contain exported nodes and base64 encoded files, we gave up on checking features files.
  • I added 2 more to my previous comment, but we cross posted
    • Advice "Do not call theme functions directly, use theme('flexslider_list', ...) instead" would break a site
    • Unknown type hint "PictureMapping" found for $picture_mapping
kreynen’s picture

You'd never know it by looking at git, but 3rd party files have always been forbidden without an "official" exception. Unfortunately, it's never been entirely clear how someone would actually request or get an exception. It has taken years, but we finally have a Licensing Working Group queue to request exceptions. We haven't approved any requests for exceptions yet because we are still working out the process and what (if anything) should change in the Git Usage Policy. The situation with 3rd party code has really gotten out of hand. The Drupal.org repo is littered with Apache2 and GPLv3 code. We've started cleaning that up, but I think you can expect to see more code with lax licenses like MIT and BSD committed directly to modules and themes, if for no other reason than to make the modules easier to install.

attiks’s picture

#34 So MIT licensed files aren't allowed either? https://www.drupal.org/node/422996 once said something else.

kreynen’s picture

Correct. No 3rd party code... not even MIT. The current policies allow 3rd party code, even if it is licensed as GPLv2 or later or under a lax license that allows re-licensing, for only a few reasons... and even then, only with an "official" exception.

https://www.drupal.org/git-repository-usage-policy

DO NOT include code from a non-Drupal project in the repository. If your module requires non-Drupal code, such as a third-party JavaScript/PHP library, provide a link to where the other code can be downloaded and instructions on how to install it.

https://www.drupal.org/node/422996

Exceptions can be made if the 3rd party library:

had to be modified to work with Drupal, and the modifications were not accepted by the original author;
is generally difficult to find, or the specific version needed is hard to find;
is no longer maintained by the original author.

I've been involved in several attempts to change this, going as far back as 2007. Jen Lampton made another push in 2012 in #1856762: Revisit and Redefine Drupal's Policy on hosting 3rd party files and/or files that is not strictly GPL.

The worst part about this policy is how inconsistently it was enforced. jQuery Update has always included multiple versions of jQuery and then added jQuery UI on top of that. Flowplayer has included GPLv3 licensed code since D5. Because these modules were popular, no one wanted to force the issue. Other, less popular modules or developers were not so lucky and had their projects unpublished and their CVS/Git access revoked.

If no one has reported the 3rd party code as a violation, you don't need to worry. Even if someone were to report it today, the policy will hopefully be changed to allow MIT code in projects before we'd get to the violation. You could also go ahead and request an exception now.

The only reason to bring this up is that if the policy changes, there will be more projects getting false positives from 3rd party code, not fewer.

jthorson’s picture

Let's try and avoid getting pulled down into specific details of one particular implementation or another ... and keep this thread about the general process as being proposed.

Given some of the comments in this thread, I think it's important to point out that the proposal is that all modules will need to run against some standard drupal.org code review tools. I read this as 'to be defined', while a number of comments in this thread are talking directly about the existing projectapplications review script. As with anything officially deployed on drupal.org, the code review scripts will be discussed, defined, and refined ... pareview may be a good foundation on which to build the code review scripts, but it's a bit early to presume that Drupal.org code review = pareview.sh in its current form. So I don't see debates about any particular rule within the ruleset adding value at this point in time (or at least, in this particular thread).

With respect to the concerns about false positives, the process would of course require an 'override' capability ... whether this is 'git admins', 'webmasters', or a new 'project admins' role, I anticipate this would look very similar to a dozen other processes on Drupal.org ... problem with a false positive? Open an issue in issue queue 'X', and role 'Y' will verify the false positive and manually check the 'passed' box.

attiks’s picture

#37 It wasn't my intention to start bike-shedding, but since it was part of the proposal and would affect all current module maintainers, I think it is important to find the right balance. Introducing a queue for false positives will probably face the same problems as the current application review queue: it will leave room for lots of discussion and will require a lot of time to manage.

If we add this automatic review, I think we should apply it all the time: whenever you want to create a stable release for a module, it should pass the manual review, not only when you create the module, since lots of modules get created using an info file and an almost blank module file. Keep in mind that this will change the process, since it will be Drupal.org that has to add the tag once a stable release can be created.

Besides that, I'm all for this proposal.

dman’s picture

I don't mind the idea that every 'stable' release should be eligible for at least the same automated review that a first release has, *but* there is no way that could be a manual third-party review as @attiks suggests.
Technically it's tricky too: currently, making a stable release means tagging the current state of the code, and if the review is triggered by that action, it would trigger re-edits, and therefore re-tagging or shifting a tag. Not gonna be nice.

What will be nice is a continuous background automated review of all projects, which at least will be able to keep up a reasonable level of (one facet of) 'quality' for users to evaluate and maintainers to aim for. It should be informational only, and though it may provide a badge or indicator, it should not be an impediment to any of the existing publishing processes.

attiks’s picture

#39 The process could be changed as follows:

  1. Create new release
  2. Enter desired tag
  3. Bot checks whether the tag already exists; if it does, error out. This is to avoid devs pushing a tag directly to force a release
  4. Push code
  5. Automated review
  6. If it passes, the code is tagged and the release is created
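
To make the flow concrete, here is a minimal sketch of the gated-release logic described above. All function names are hypothetical; this is illustration only, not the Drupal.org implementation.

```python
# Hypothetical sketch of the gated release flow proposed above.
# None of these names correspond to real Drupal.org APIs.

def create_release(existing_tags, desired_tag, run_scan, tag_and_release):
    """Return True if the release was created, False otherwise."""
    # Step 3: refuse tags that already exist, so a dev can't push a
    # tag directly to force a release past the scan.
    if desired_tag in existing_tags:
        print(f"error: tag {desired_tag} already exists")
        return False
    # Step 5: run the automated review on the pushed code.
    if not run_scan():
        print("automated review failed; release blocked")
        return False
    # Step 6: only now is the code tagged and the release created.
    tag_and_release(desired_tag)
    return True
```

The key design point is that the tag is created by the infrastructure after the scan passes, rather than pushed by the developer.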

PS: If this changes, will this also change for Drupal core?

jhodgdon’s picture

Regarding #39/#40, the problem with forcing each release to pass automated review is the false positive error reports. A project that suffers from these, and has been deemed to pass via human bypass, would have problems on every release.

The saner alternative would be to make this automated testing available for people to look at (especially the project maintainer), but not block any project from having releases based on its output. Or possibly to have some way of annotating the code ("ignore the error here") so that these tests could pass.
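
A sketch of how such an annotation might work: the scanner drops any finding whose preceding line carries an ignore marker. The marker name and function are invented for illustration (tools like PHP_CodeSniffer support a similar inline-suppression idea).

```python
# Hypothetical sketch of honoring an inline "ignore" annotation.
# The marker name is invented for illustration.
IGNORE_MARKER = "@drupal-scan-ignore"

def filter_findings(source_lines, findings):
    """Drop findings on lines whose preceding line carries the marker.

    `findings` is a list of (line_number, message) tuples, 1-indexed.
    """
    kept = []
    for line_no, message in findings:
        prev = source_lines[line_no - 2] if line_no >= 2 else ""
        if IGNORE_MARKER not in prev:
            kept.append((line_no, message))
    return kept
```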

davidhernandez’s picture

Some of what is being discussed is already in the proposal.

All projects, new and existing, will be scanned regularly by Drupal.org code review tools. The results of these scans will be made available to project maintainers and will be used to display pertinent information on project pages. The information may include security warnings, error counts, or other information related to project health. This scan will also be used on all projects before they are promoted to full status.

We hadn't discussed making the scan a blocker for a release, but do intend to scan all projects on a continuous basis. Making it a gate would require a lot of discussion and research, which can come later. For now I would not include it in the initial changes as it would likely be very disruptive, and we need a lot more experience scanning.

kreynen’s picture

@jhodgdon Since the proposal starts by describing the current review automation as a big improvement and doesn't mention rewriting it, I'm not sure why anyone would assume that.

On the one hand I feel like forcing everyone to experience something similar to what new contributors go through now would make more people aware of the issues with the current process and could lead to more people contributing to improvements. On the other, adding more hoops could result in even more projects moving to GitHub.

David_Rothstein’s picture

For anyone confused by #43, replace "@jhodgdon" by "@jthorson" and it will make more sense :)

Plazik’s picture

All projects, new and existing, will be scanned regularly by Drupal.org code review tools.

This is a very good idea, since many popular modules don't follow Drupal coding standards; for example, http://pareview.sh/pareview/httpgitdrupalorgprojectctoolsgit-7x-1x.

David_Rothstein’s picture

@davidhernandez, I think what people are reacting to is this part of the proposal:

All users, regardless of their Git-vetted status, will follow this process:

  1. Create a sandbox project.
  2. Initiate promotion of the project, which will mark the project as ready for an automated scan. Only the owner of the sandbox can initiate the promotion process.
  3. If the project passes the automated scan, the project will be given full status.
  4. Choose a namespace for the project.

Especially given the likelihood of false positives in the scan, I don't see what benefit this would really provide. And as #38 points out, it's really easy to get around this if you want to (just promote your project first, and add the real code later). Some people do this intentionally now anyway, because they want to get their project public and on the module list a bit before they're ready to add the code.

davidhernandez’s picture

The scan does/will discover legitimate problems and help people create better modules. Will it be perfect? No, of course not. I don't think anyone has delusions about that. Regardless, we have no choice but to move towards a much more automated system. The influx, and potential increase, is just too much. What we'll need to do is sort out the false positives as best we can when they are discovered, and evolve the system.

But, to agree with jthorson, the focus should be on the policies. We will work out the implementation later, and make adjustments if needed.

David_Rothstein’s picture

Sure, but why block projects from initial promotion to full status based on the automated scan? Why not just display the scan results next to the project?

davidhernandez’s picture

We want the scan to definitely block certain things, like empty modules, spam, potentially malicious code. I agree we don't need it to block on everything, but have not fully discussed what should or should not block a promotion. We will definitely discuss that during implementation. I imagine we'll start with a fairly relaxed ruleset since this is all new.

webchick’s picture

I think what folks are getting at is that as a vetted user, I can create a module with a .info and a single function called "hello" with an empty function body and pass the scan. Then get full project access, then add whatever horrible security hole-laden code that I have on my local.

So basically, either we shouldn't block full project creation on the automated scans, or we should move that blocking to stable release creation, but then see the concerns raised by attiks and others.

gabesullice’s picture

@Matt V. in #28

I'm against the idea of a "badge" system for marking projects as having had security reviews.

Totally agree with you, that's why I was proposing the 'anti-badge'.

From #25: "Overall, the anti-badge idea is very appealing to me because it actively encourages maintainers to pursue secure and quality codebases while not giving an explicit endorsement of security to modules with a "Security Reviewed" tag of some kind."

randallknutson’s picture

I'm all in favor of reducing barriers to entry for new developers, but I don't think the original proposal goes far enough. I've been involved with other communities, and they set their bar REALLY low for contributing; having anything much above that is going to demotivate people from contributing to our community.

Process to contribute on node (npm):
https://docs.npmjs.com/misc/developers
1. Create package
2. Upload to git
3. Create npm user account
4. Publish the package

Process to contribute to PHP (composer):
https://packagist.org/
1. Create package
2. Upload to git
3. Create packagist account
4. Submit the package

Process to contribute to Ruby (gem):
http://guides.rubygems.org/publishing/
1. Create gem
2. Upload to git
3. Create rubygems.org account
4. Publish the gem

We as a community should have similar levels of entry for contributors. I'd love to see drupal's process be:
1. Create a module
2. Create a drupal.org account
3. Upload to drupal git
4. Publish the module*

*As part of the publish step we would run validations and tests before the release passes.

We should do away with the idea of "vetted" users and "promoted" projects. If ctools, views, etc don't pass the validation we should either fix the validation or fix the modules.

klausi’s picture

@randallknutson: could you also research if any of those communities have a security team that publishes advisories for all of their modules?

gabesullice’s picture

If ctools, views, etc don't pass the validation we should either fix the validation or fix the modules.

Well put. I'm sure it's easier said than done though.

davidhernandez’s picture

I can create a module with a .info and a single function called "hello" with an empty function body and pass the scan.

I see the larger point, but the scan should block this.

webchick’s picture

Well the scan can only be so smart. If it filters out functions with an empty body, then I'll simply enter "return 'hello';" in it and re-pass the scan. Or initially upload someone else's module that passes all scans. Or whatever. The point being, the gate feels like it's in the wrong spot (for vetted users). So either we should move the gate or remove it.

webchick’s picture

Regarding this:

@randallknutson: could you also research if any of those communities have a security team that publishes advisories for all of their modules?

I have long advocated that the security team should not publish advisories for all modules, but instead only those that meet a particular threshold. (say, 1000 installs, and this would be clearly indicated on a project page) Volunteer effort is priceless and spending multiple hours by multiple very busy people on a module with 6 installs has always caused my soul pain. :(

This has not been particularly popular with the security team to date, though. I believe the concerns are around predictability for end users. OTOH, the fact that our contribution policies are so out of whack with other communities' ultimately hurts users too, because the developers they depend upon choose to participate elsewhere, not here, leading to fragmentation, burnout, etc.

However, per jthorson, I don't believe it's currently on the table to debate this existing policy (unless it is ;)).

davidhernandez’s picture

Wouldn't the same concerns about sidestepping the scan apply for a release? It seems a bit like kicking the can down the road. And if we end up scanning promotion by non-vetted users, but not by vetted users, we are back to strengthening the two tiered system.

We have not codified what would or would not block promotion, but that will get worked out at some point. If the scan is good at blocking spam, but has too many false positives for everything else, maybe that's all it blocks and everything else is just a warning. I don't want to guess at what we'll be capable of right now.

webchick’s picture

Ok, now commenting on the proposal itself:

  • Huge +1 to reducing the burden on humans and increasing the burden on automation, generally-speaking.
  • Huge +1 to making the security team's involvement (if any) in a given project abundantly clear.
  • +1 to enabling automated scanning of all projects. Even long-standing projects would benefit from notifications if they're violating small rules (if nothing else, these make good novice contributor tasks).
  • +1 to the use of badges/anti-badges on project pages for communicating the level of security team involvement, as well as the results of the scan. Not only does this present some additional criteria for a user to make smart decisions on what modules to download, it opens the door for us to eventually "Filter on output, not on input" like all of the other communities enumerated in #52 (I do understand that folks feel we're not ready for that yet, and the intent is for this to be a baby step, that's fine.) We do need to be careful in the design of these though, since we definitely don't want to shy people away from a module like Views just because Earl prefers tabs instead of spaces (or whatever the problem is). OTOH we definitely do want to shy inexperienced users away from a module that has zero security team coverage.
  • +1 for reducing the strictness of manual reviews, and focusing on the big things only. This is a much better use of volunteer time, and also ensures that the first interaction people have with Drupal.org contributors is a valuable one. I actually disagree with klausi that this means abandoning "collaboration over competition," it just means not blocking code from being posted because of it (in other words, it's the very definition of the phrase :)). This is something Dries has long advocated for us doing, as have several others.
  • I agree with concerns that have been raised about putting warnings on people. That feels very problematic to me, and indeed makes the "two-tier" problem worse. We should be able to communicate our general lack of confidence with the badges/anti-badges on the project itself.
  • I'm a bit nervous about enabling dev releases on sandboxes, but OTOH it's possible even now to get a dev release of a sandbox, it's just convoluted and involves navigating your way around cgit. And with the badges/anti-badges on the project page, that should help warn people before they do this that they're doing so at their own risk. I think the rationale for making this change is sound.
  • As mentioned in a couple comments above, I think placing the check at the creation of full projects for unvetted users makes sense, but don't see any point in doing so for vetted users. Maybe I'm missing a cluestick though... I'm just not sure what problem we're trying to solve here beyond "hey, look, vetted users have to jump through hoops too so it's not quite a two-tiered system, just a 1.5 tier system" but that hardly seems like a good reason to introduce barriers. :) Not like I really feel strongly enough to want to block this change, but could more rationale be provided for this: "This scan will also be used on all projects before they are promoted to full status"? I guess to prevent namespace squatting?
  • Like others, I'm a bit confused on the arbitrary limit of one full project. Is this also to prevent namespace squatting?

TL;DR: There's a lot to like here. I feel pretty strongly about the user labeling issue. I also feel somewhat strongly about the gates at the full projects for vetted users, at least without additional rationale. I'm mostly ambivalent about the 1 full project limitation for non-vetted users, other than it's a confusing and seemingly arbitrary restriction that we're going to have to document and explain to people. If the goal on both of those is indeed to prevent namespace squatting, I wonder if we could address that in a different way that doesn't involve annoying humans. For example, could we not just initiate a process that auto-demotes full projects after a week with no code or similar?

webchick’s picture

Wouldn't the same concerns about sidestepping the scan apply for a release? It seems a bit like kicking the can down the road. And if we end up scanning promotion by non-vetted users, but not by vetted users, we are back to strengthening the two tiered system.

Well, no. Because the stable release is your module, what's going to show up on the project page, the thing users will download, what the security team will be responsible for, etc. Versus commit #1 of a given project could be absolutely anything, and also could bear absolutely no similarity whatsoever to that first stable release (and in fact, often won't).

I don't deny it retains a two-tier system, but it makes more logical sense, and the proposal does nothing to change the two-tier system anyway. It still distinguishes "vetted" from "non-vetted" users, it still forces non-vetted users to go through a manual approval process, and it even talks about putting a warning badge about the fact that project owners are non-vetted users on their projects (which is worse than the status quo).

davidhernandez’s picture

Versus commit #1 of a given project could be absolutely anything

If people are in the habit of adding one commit with nothing in it and then promoting the project, why do we want to continue that behavior? We won't expect a new project to be in release state (although that could be another discussion) but it should contain something properly functional.

and it even talks about putting a warning badge about the fact that project owners are non-vetted users on their projects (which is worse than the status quo).

I think that depends on a point of view. The status quo now is no project access at all. So which is seen as worse?

webchick’s picture

Actually, I *frequently* start with zero code or barely any code when I make new projects. Most people do. (And not because we're trying to game any particular system.) That's just a best practice. It's one of the strengths of version control, that you can do very granular commits and see a developer's thought process all the way through. If a project starts with a fully-baked module, there's no way for me to understand what the process was that developer went through to get there, what bugs they found along the way, etc. We should encourage "commit early, commit often" more so than "commit already done." We're losing valuable project history that way.

This is why if we're going to keep a gate, IMO we should probably put it around "first stable release." For both kinds of users. The difference between vetted and unvetted is they also need a manual review in addition to passing the automated scan.

davidhernandez’s picture

I'm not talking about starting the Git repo with zero code. I'm talking about promotion. You can do all of that while it is a sandbox. Though I'm waning on the argument only because I see now the namespace would be an issue. :/ You wouldn't want to have to rename things if the name is taken in the mean time.

webchick’s picture

Right. You want the namespace as soon as possible, hence the loosening of the full project restriction on non-vetted users. This is why I'd advocate solving that problem in a different way. (Wouldn't have to be here, can be later in that spin-off issue.)

jthorson’s picture

#48:

Sure, but why block projects from initial promotion to full status based on the automated scan? Why not just display the scan results next to the project?

There is a balancing act between unblocking the contribution path, and maintaining the (accurate or not) public perception regarding high code quality of modules published on Drupal.org. I'd suggest that enforcing this scan will contribute to an overall code quality improvement (or, more realistically, consistency improvement) within published contrib projects on Drupal.org; but just providing the results and expecting people to review them devalues the effort put into implementing this in the first place ... my own opinion is that they will be largely ignored.

That said, I don't see it being efficient to scan at every release either, given the 'false positive' override concern already expressed in this thread ... I would advocate for a 'first release' or 'major release' scan requirement, while subsequent minor releases would be grandfathered in to the initial scan results (or still be scanned, but not as a gate to release creation).

Also, it would be impossible to suddenly apply a set of automated scan tools to the tens of thousands of existing contrib projects already on Drupal.org and expect them to comply with the new ruleset. For this reason too, I would advocate that the scan become a gate only on the creation of a new major release; existing projects would not be expected to conform to the automated scan rulesets for incremental releases on their existing major release streams. (i.e. If a project is at 7.x-2.2 at time of implementation, no future 7.x-2.x release would be subject to the scan as a 'gate', but perhaps a new 7.x-3.x branch could be?)
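
The grandfathering rule above reduces to a simple check: a release is gated only if it targets a major branch that did not exist when the policy took effect. A sketch, with hypothetical names:

```python
# Hypothetical sketch of the grandfathering rule described above:
# only releases on a *new* major branch are gated by the scan;
# branches that existed when the policy took effect are exempt.

def scan_is_gate(release_branch, branches_at_policy_start):
    """True if the automated scan should block this release."""
    return release_branch not in branches_at_policy_start

# Example: a project at 7.x-2.2 when the policy starts.
existing = {"7.x-1.x", "7.x-2.x"}
# A 7.x-2.3 release on the existing 7.x-2.x branch: not gated.
# A first release on a new 7.x-3.x branch: gated.
```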

#50/55/56:
People can and will game any system we come up with. But let's not evaluate the merits of the system based on the exceptions that we're going to run into ... if people are implementing these types of workarounds to avoid the policies established by the community, then I'd say we would address that the same way we address (as an example) people who directly contravene our licensing policies on drupal.org ... i.e. handled on an exception basis, as opposed to expecting the system to anticipate and prevent any workarounds. The more we try to close the workaround holes, the more inflexible the system becomes ... and I think this inflexibility is something that flies contrary to our guiding principles as a community.

gabesullice’s picture

I'd like to somewhat summarize where I think the discussion is up to this point so that we might amend the proposal and center the discussion. Please note, these comments do NOT address whether we should have 'vetted/non-vetted' statuses at all. Nor do they address the name-squatting spin-off in #62.

Proposal

All Projects

All projects, new and existing, will be scanned regularly by Drupal.org code review tools. The results of these scans will be made available to project maintainers and will be used to display pertinent information on project pages. The information may include security warnings, error counts, or other information related to project health. This scan will also be used on all projects before they are promoted to full status.

It seems we all agree on this. Discussion is centered around implementation and details on what the 'warnings' might appear as.

Projects pages will contain text explaining the Security Team’s responsibility for development releases. This text will be part of the downloads section. Drupal’s Update Status report will also display a warning when running the development version of a project.

I haven't seen much discussion of this in particular, but the sentiment seems to be clear that we need to make the security team's involvement more clear.

Project Promotion

All users, regardless of their Git-vetted status, will follow this process:

  1. Create a sandbox project. - NO CHANGE
  2. Initiate promotion of the project, which will mark the project as ready for an automated scan. Only the owner of the sandbox can initiate the promotion process. - NO OBJECTIONS
  3. If the project passes the automated scan, the project will be given full status. - UNDER DISCUSSION
  4. Choose a namespace for the project. - SPUN OFF

From what I can glean, I think the above flow is not under objection for 'vetted' users. The full-status after automated scan is only under scrutiny for 'non-vetted' users. Perhaps we can remove the 'regardless' clause and specify that it's for 'vetted' status.

Non-vetted Users

For non-vetted users, the following restrictions apply:

  • Only one sandbox project can be promoted to full status. - UNDER DISCUSSION AS ARBITRARY
  • The promoted project will have development snapshots, but cannot have tagged releases. (beta, rc, 1.x, etc.) - NO OBJECTIONS TO TAGGED RELEASES, ONLY DEV SNAPSHOTS
  • The project page will have a visible indicator that the project is maintained by a non-vetted user, including text that the project is not subject to Security Advisories. - OBJECTIONS

I propose we break it up like so:

For non-vetted users, the following restrictions apply:

  • Only one sandbox project can be promoted to full status.
  • The promoted project will have development snapshots
  • Modules may not have tagged releases. (beta, rc, 1.x, etc.)
  • The project page will have a visible indicator that the project is maintained by a non-vetted user.
  • Text that the project is not subject to Security Advisories will be placed on the module page.

By breaking it up like this, I think the last point becomes moot as all modules ought to have this text. This leaves only the two points under contention, dev releases and user-centric indicators on module pages.

Requesting Vetted Status

When a user completes the project application review process, the restrictions above are removed. The process is as follows:

  1. Submit a project application request in the Project Application queue. The project submitted for review should have already been promoted to full status, and passed Drupal.org’s automated scan.
  2. Reviewers will manually review the project, checking only for problems related to licensing, security, or major problems with Drupal api use. Applications will not be blocked for minor coding standards violations, bugs, or project duplication.

It seems that, given the scope of the proposal, there are no objections to this. That said, there is a large discussion around vetted/non-vetted status altogether. Everything above is subject to the assumption that we're not changing that rule, but amending the current process to be more amenable, not removing the process altogether.

davidhernandez’s picture

I just want to make sure we are clear that adding a gate to releases would be more restrictive than what is proposed. The scanning of all projects is informational, to try to determine the health of the module. The only gate is on promotion.

webchick’s picture

That said, I don't see it being efficient to scan at every release either, given the 'false positive' override concern already expressed in this thread ... I would advocate for a 'first release' or 'major release' scan requirement, while subsequent minor releases would be grandfathered in to the initial scan results (or still be scanned, but not as a gate to release creation).

Yes! Exactly. I think we cross-posted, but that's exactly what I was trying to get at. Move the gate from full project creation to *first* (major) release creation. That's the point at which it starts impacting the security team and users (at least the ones who heed dire warnings ;)) directly. My concern with the "full project promotion" gate for everyone is less around the gaming (I agree we shouldn't hurt everyone because a few people are jerks) and more about it just simply being an illogical place to put the gate, since at that point you often don't have anything approaching a complete module because you're simply reserving the namespace.

@gabesullice: Awesome synthesis!

webchick’s picture

Now. That said, if we choose to go with that idea, there is an obvious downside, which is at least some developers will deliberately never make stable releases because then they never have to deal with additional restrictions. :P That has ecosystem impacts and detriments for users, too. OTOH I think we could balance that out with "carrots" like lovely badges, "featured" listings, ratings/reviews, whatever. (This is total spitballing and totally off-topic from this proposal, so please don't fixate on those suggestions. :))

The advantage though is that instead of annoying close to 100% of module developers, you're now only annoying like 20% (I can query for exact data later, if desired).

mpdonadio’s picture

The advantage though is that instead of annoying close to 100% of module developers, you're now only annoying like 20% (I can query for exact data later, if desired).

We annoy 100% of core developers over coding standards; nobody gets a free pass. Ignoring issues related to ctools/views plugin naming for the moment, why should contrib be different?

Tests against core and RTBC patches get run routinely. Why should contrib be different? Having a post-update hook trigger the scan would be pretty equivalent.

webchick’s picture

Well, I would solve that by making coding standards violations a "notice" rather than "warning" level problem, but it's hard because we've been asked repeatedly by the authors of this process not to delve into details of the tools and what they would do.

jthorson’s picture

#72: Sorry ... I didn't intend to give that impression ... only a recognition that this thread has the potential to spin into a thousand bikesheds if we start to discuss implementation details; and an attempt to encourage a concise discussion in the 'policy' realm before adding in the more contentious design details. :)

DamienMcKenna’s picture

@jthorson, @webchick: Then there are the (rare) maintainers who say "I don't have to follow the coding standards and will reject patches to fix coding standard violations"; I suspect this topic is in need of a separate discussion.

Plazik’s picture

I can create a module with a .info and a single function called "hello" with an empty function body and pass the scan. Then get full project access, then add whatever horrible security hole-laden code that I have on my local.

We can add a minimum number of code lines without comments, for example 100.
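
A naive sketch of such a check, counting non-blank lines that aren't pure comment lines (the function name and comment-prefix heuristic are invented for illustration, and as other comments in this thread note, a check like this is easy to game):

```python
# Naive sketch of the minimum-line-count heuristic suggested above.
# The comment-prefix heuristic is simplistic and easy to game.

def meets_minimum_lines(php_source, minimum=100):
    """Count non-blank, non-comment lines; True if >= minimum."""
    count = 0
    for line in php_source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(("//", "#", "*", "/*")):
            continue
        count += 1
    return count >= minimum
```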

gisle’s picture

@plazik wrote:

We can add a minimum number of code lines without comments, for example 100.

I believe that #56 already explains why that won't work either.

The bottom line is that the proposed automatic scans can be easily circumvented by anyone who wants to game the system. So we should not use these tools in a fashion that gives developers an incentive to game the system.

I think automatic scanning tools are fine, but they should not make decisions (about promotions or other things). Instead their use should be to assist the developer and the security team. Also, the result of automated scans should not be made visible to ordinary users of the site by means of an "anti-badge" or other methods.

As for the "anti-badge" proposal: These scans are probably going to generate a lot of false positives, and having an anti-badge attached to one's project because the automated scanner got it wrong will probably be as bad an experience for a budding developer as not being allowed to create an installable release. It may also unjustly tar the developer's reputation. The logistics of dealing with anti-badges that the developer thinks are unfair will consume a lot of resources to handle. And if not handled in a timely fashion, that too will create a bad user experience.

We already have a way of dealing with unfixed projects that humans have found to be insecure. They have their releases pulled and are marked "Unsupported". I believe that this is also how insecure projects should be dealt with in the future.

However, I am in favour of a more open and inviting process than the current one. Instead of using these automatic tools to prevent promotion or to "anti-badge" the project, we need to think about how to provide better support for budding developers, so that the developer is incentivized to fix his insecure and poorly coded sandbox before promoting it to an installable project.

My proposal is that all users, regardless of their Git-vetted status, will follow this process:

  1. The owner initiates promotion of the sandbox, which will mark the project as ready for an automated scan (for coding standards and security). Only the owner of the sandbox can initiate the promotion process.
  2. The owner is shown the output of the the automated scan. If the project fails the scan, the owner is advised that there may be problems, and that if the owner doesn't understand the output of the scan, voluntary code review (see below) may be an option. The owner is also told that if he goes ahead and promotes the project without a clean scan, it will be flagged with the security team, but that flagging does not guarantee that the security team inspects the project.
  3. If the owner still wants to go ahead and promote the project, he is required to tick a form (tickbox + comment field) with this declaration: "I am aware that this project failed automatic scanning. However, I declare in good faith that I am convinced that all the problems reported are false positives (see my comments below for an explanation of this conviction)." This form is only visible to the owner and the security team.
  4. If the project passes the automatic scan, or if it fails but the owner declares in good faith that it is secure, he is allowed to go ahead and promote the project.
  5. In case of a project failing automatic scans, the owner is offered the chance to put the sandbox up for voluntary code review. This is basically the same process as the review bonus program run by klausi today, based on a combination of automated scans, human review, and some helpful education of the developer.
  6. After passing voluntary code review, the developer may have learnt how to fix any real problems and how to identify the false positives in the automated tools. Then he can go ahead and promote the project in good faith.

This scheme has the following advantages compared to what is proposed:

  • It removes the two tiers of developers, since all developers go through the same process.
  • The voluntary code review gives all developers who want it the benefit of having their code reviewed by others.
  • By forcing the developer to review the output of the automated tools before deciding what to do next, the developer is incentivized to fix the problem himself or to seek help (through voluntary code review) if unable to do so.
  • The comments attached to the declaration the user must sign before being allowed to promote a project that fails automated review may help the security team in profiling the developer as experienced or not.

I think we should get rid of the two-tier system entirely. I even suspect that some veteran developers who were grandfathered into git-vetted status would benefit from a refresher course in security and coding standards by going through a voluntary code review.

However, if the community wants to keep the two tier system, only users that are not git-vetted should be subjected to the above procedure.

attiks’s picture

#76 This sounds like a very good and sane proposal, I like the idea of removing the two-tiered system completely.

DamienMcKenna’s picture

@plazik: A minimum number of lines wouldn't work because it would rule out something like this module: https://www.drupal.org/project/remysharp_html5shim

Plazik’s picture

@DamienMcKenna as far as I know, there is already a limit on the number of lines of code for new projects.

gisle’s picture

@Plazik,
there are no limits on lines of code in project applications (nor should there be; we do not want to encourage bloat).

If an application has too few lines of code to allow reviewers to evaluate the author's experience level, it is still promoted to a full project (by a git admin) when the review is completed.

gisle’s picture

Another problem with the anti-badge system proposed in #25 is that when users become aware of an anti-badge like "This module has not passed a security review.", they may start to believe that any project without this badge has had its most recent release reviewed and is therefore secure.

But the security review that prompted the removal of the badge may have happened many years, releases, and even maintainers ago. Unless this anti-badge is automatically reintroduced on the project for each new release until that release has undergone security review, the absence of this anti-badge may be misleading.

fuzzy76’s picture

I believe #76 would result in a lot of people ignoring the scans, publishing crap, and flooding the security team with flagged projects. If you start poking around d.o you will find a lot of projects that are installed on thousands of sites without any stable release, and a lot of just plain bad code. We need something that actually encourages devs to get their act together (or at least prevents people from using these modules), and I think public scan results and anti-badges are a great tool for just that! False positives should either be fixed in the scanner or reclassified as notices so they won't cause any red flags on the project page.

As a previous eager participant in the discussions on these policies, I think this proposal (as summarized in #67) is a great step forward! Of course everyone has ideas on where to go next, but I think it is important not to do too much at once. As in other aspects of development, we should steer for the Minimum Viable Product first and correct as we go forward. :)

randallknutson’s picture

Getting rid of the two tiered user system would be a very good thing.

Have any of you with "vetted" status tried to go through the application process in the last few years? It is absolutely awful. I must admit I went through before the current process was in place, but my experience was one of the worst experiences I've been through as a developer, and I've talked with many others who had similarly bad experiences under the current system. Many times the process takes months, and the feedback for fixes is frequently trivial stuff like missing spaces. Then you get to the very end and are denied for some subjective reason like "too close to an existing project" or "too trivial a project." Then you have to start all over and hope that after the frustrating review process your new project will pass the last hurdle.

This process could very easily have killed some of our most popular modules out there when they were first starting. For example, cck could have been rejected because it was too close to flexinode. Who knows how many great modules we've missed out on because of our policies, particularly the "don't duplicate effort" mantras.

We need to get rid of the idea that we only want "good" developers to contribute and that we can determine who those are through a review process. This is the message we are sending to developers. Other communities out there are thriving right now with wide open access. We are going to keep driving away new developers until we change this.

The open source community, spearheaded by github, is a place that encourages people to contribute and encourages forking of code. We should adopt these ideas as well. Currently we do almost the exact opposite. We make it very hard for users to truly contribute and almost every new project is greeted with an issue of "Isn't this a duplicate of X module?"

So, we should get rid of the two tiered system and use scanning/badging to encourage good code and community standards.

kreynen’s picture

It is absolutely awful. I must admit I went through before the current process was in place, but my experience was one of the worst experiences I've been through as a developer, and I've talked with many others who had similarly bad experiences under the current system.

My experience with the current process has been very similar. I've worked with professional developers at commercial services wanting to offer a Drupal module and contributors to other open source projects. None of them can believe that the Drupal project gets any new contributors after they've experienced what we put new users through.

That said, I find this statement really scary...

We need something that actually encourages devs to get their act together (or atleast prevent people from using these modules)

I'm all for eliminating the current two-tiered system, but I don't want to see it replaced with another type of tiered system that strongly discourages anyone from ever trying modules that haven't been fully vetted. If we make it too hard to find these projects, or scare users away from even trying them, we are going to deprive new projects of the users they need to get feedback and improve. We are also going to discourage sharing early and often.

Why would any new project host its work on Drupal.org if it would actually be easier for people to find and try the project somewhere else, where it isn't hidden from searches and its project page isn't full of warnings telling potential users not to use it?

There needs to be some balance between the needs of the Cathedral and the innovations that come from the activity at the Bazaar. The recent changes made to Drupal.org have been a great balance between structure and flexibility. I'll try to remain optimistic that these ideas will be implemented in a similar fashion.

I am interested in how you see distributions, dependencies, and drush being impacted by this new plan.

If a distribution had a 1.0 release that contained dev snapshot and betas, would that really be considered stable and secure?

What about a module that required a beta version of another module as a dependency? Can that really be considered stable and secure?

Distributions now have a basic scan of the versions of core and projects included in their .make and display a security warning on the project page... but only on the project page.

It seems far too easy to get the code of a simple module vetted when it requires dependencies that include warnings. With the workflow a large number of developers follow, they would drush dl simple_module and see something like...

The following projects have unmet dependencies:
simple_module requires unvetted_code
Would you like to download them? (y/n):
David_Rothstein’s picture

I am interested in how you see distributions, dependencies, and drush being impacted by this new plan.

If a distribution had a 1.0 release that contained dev snapshot and betas, would that really be considered stable and secure?

What about a module that required a beta version of another module as a dependency? Can that really be considered stable and secure?

I think those kinds of gaps are intended to be addressed by the part of the proposal that suggests having the Update Status module in core warn about releases that aren't covered by security advisories? (At least as much as it can be addressed.) You might not see the warning while downloading the module, but at least you would see it when you enable the module on your site.

kreynen’s picture

Since all security warnings currently show up as basically the same "threat level", I don't think waiting until users install a project to alert them of a problem could really be considered taking security seriously.

Would you use a unix distribution that allowed you to apt-get install software from the primary repo before telling you that what you just installed has made your server vulnerable?

We still have 4 distributions users can download from Drupal.org that include the drupalgeddon exploit and no warning! 2 of these are tagged as "stable" 1.0 releases. Waiting until the user has installed these to warn them for the first time that their server is compromised is ridiculous.

While we are working to prevent the method these distributions are using to avoid being labeled as insecure, I really don't like the idea that we'll see more projects labeled 1.0 or greater with an "official stamp" that they are stable and secure... only to find out after I've installed one that I now have security issues.

Seems like very confusing messaging.

webchick’s picture

That indeed seems like a valid concern, but not really sure how that relates to this proposal. I don't see anything here that makes that "pre-existing condition" particularly better or worse, other than we could add "don't check in full core / projects in distributions" to the automated scanning tools and "depends on unvetted code" as one of the "anti-badges" on a project page, if we choose to go down that route.

kreynen’s picture

Currently there is little perceived benefit for many users between a release labeled beta13 and one labeled 1.0 (as seen by Feeds' ~90K installs).

Adding information to projects that says a 1.0 release has received some additional vetting by automated tests, reviewers, and the Drupal.org security team (or that anything < 1.0 hasn't) adds value to reaching 1.0. I agree that is a good thing.

What I don't like is how easy it would be to game this system and get the value of a 1.0 while still requiring code that puts sites and servers at risk, and how confusing this would be to most users.

What I'd like to see is scanning of the dependencies listed in the .info file, similar to how we scan distribution .make files. If your project has dependencies that aren't at 1.0 or greater, that should be obvious on both the project page and through drush.

drush dl feeds_vimeo

The following projects have dependencies that haven't received security or coding standard reviews:
feeds_vimeo requires feeds
Would you still like to download them? (y/n):
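The dependency scan described above could be sketched roughly as follows. This is a hypothetical Python fragment, not how Drupal.org's packaging actually works: the .info parsing is simplified, and the release data is an invented stand-in for whatever the real release database would provide.

```python
# Hypothetical sketch: read a module's .info file, look up each
# dependency's releases, and flag dependencies that have never
# reached a 1.0 (or greater) stable release.
import re

def parse_dependencies(info_text):
    """Extract dependency names from 'dependencies[] = name' lines."""
    return re.findall(r'^dependencies\[\]\s*=\s*(\w+)', info_text, re.MULTILINE)

def unstable_dependencies(info_text, releases):
    """Return dependencies whose best release is below 1.0 (or missing)."""
    def is_stable(version):
        # Treat e.g. 7.x-1.0 and later as stable; dev/alpha/beta/rc are not.
        m = re.match(r'\d+\.x-(\d+)\.(\d+)$', version)
        return m is not None and int(m.group(1)) >= 1
    return [dep for dep in parse_dependencies(info_text)
            if not any(is_stable(v) for v in releases.get(dep, []))]

# Invented example data for illustration only.
info = """name = Feeds Vimeo
core = 7.x
dependencies[] = feeds
dependencies[] = ctools
"""
releases = {
    'feeds': ['7.x-2.0-beta13'],   # never reached 1.0
    'ctools': ['7.x-1.9'],         # has a stable release
}
print(unstable_dependencies(info, releases))  # ['feeds']
```

A check like this could feed both the project-page warning and the drush prompt shown above.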
balazswmann’s picture

I am very curious what you think about "private" projects? I mean projects which can be accessed/cloned only by the author(s) and admins (or other people who have the appropriate permission). The purpose would be that until the project reaches a specified status (e.g. it is ready to become a full project) it wouldn't be available to the public. If this feature or a similar one is already implemented, then sorry, I didn't know about it.

gisle’s picture

Re: #89

This would be bad. It should not be possible to host a private project on Drupal.org.

If you want privacy, you set up your own repo for your team (on your own server, or on GitHub). Then you move it to a sandbox on Drupal.org when you're ready to let others access your work for testing and evaluation purposes, before you promote it to a full project.

Drupal.org is an open community. To stay that way, we should not introduce broad "features" that encourage the creation of closed sub-communities.

balazswmann’s picture

#90 I see, thanks for the answer.

gisle’s picture

The problem with automated tests opening and closing a gate, or removing anti-badges, is that it is too easy to game such a system.

Let's say Clueless Joe's sandbox project fails the automated tests. Joe tries to fix it, but is unable to figure out how.

So what does Joe do? Well, if he is really clueless, he keeps changing random stuff (or simply deletes the lines the robot doesn't like) until the project passes robot review. Voila! The gate opens (and/or all the pesky anti-badges disappear). Joe is now the proud owner of a shiny new full project hosted on Drupal.org. The project is of course unusable, because whatever he's done to mute the robot has done nothing to improve its quality (it has probably made it worse). Joe knows this, so he quickly reverts his project to the last version that actually seemed to work, and then "updates" his project repo. This is the same version as the one the robot rejected.

I honestly don't think that a gate and some (anti-)badges managed by a robot will stop people who are determined to publish crap from doing so.

mpdonadio’s picture

@gisle, you are the only name here that I recognize as having gone through the review process recently. What were your experiences with it? Do you think the proposal would have made it go better for you? Your situation was a little different, but I think others would be interested in what you thought about it.

gisle’s picture

@mpdonadio wrote:

@gisle, you are the only name here that I recognize as have gone through the review process recently. What were your experiences with it?

The present review process gets some criticism in this thread (e.g. #83, #84). My thoughts about it are more positive. Yes, it was a rough ride, but it was also very much an educational experience. Prior to signing up for the review, I had read a few books about Drupal module development, and I know how to google. But having one's own code scrutinized and commented upon by knowledgeable developers taught me things I had not been able to learn from books, etc. And yes, there were some clueless junior reviewers along the way insisting on changes that made no sense (such as wrapping strings inside hook_menu() in t()), but they were quickly corrected by more senior reviewers.

I also reviewed other applicants' projects, initially because having the "review bonus" was clearly a requirement if you wanted your own project reviewed in a timely fashion. But later on I realized that I learnt just as much from providing feedback on other people's code as I did from receiving feedback on my own.

To sum up: I think the current process is quite sound and provides an excellent, if perhaps somewhat rough, learning experience for those who participate. Its only problem is that there are too few reviewers available (both junior and senior) to keep up with demand, making the review process drawn out and therefore an unhappy experience for many who want to see their projects quickly deployed on Drupal.org.

The present system is actually not that bad if you make it your business to participate in peer review (as opposed to sitting still and expecting the reviewers to fix your bugs for you). Its main problem is that it is seriously understaffed and unable to keep up with demand. If we are going to replace it (and this issue tracker indicates we are), I believe that deploying (anti-)"badges" and robot-operated "gates" will lead to an awful user experience for honest players, and a flood of badly coded and insecure cruftware from those who are less honest.

If we are going to create a friendlier process, I think that much can be gained by reaching out to budding developers and saying: We are going to trust you until you do something that betrays our trust. We are also willing to help you make your code secure and standards compliant if you let us.

I have therefore suggested (in #76) that we soften up the system by replacing the current compulsory review with a voluntary one, because I think that human reviewers (assisted by robots) can help those who want to, to learn. As opposed to robots alone, which are just annoying.

chx’s picture

Here's a proposal:

  1. Anyone can publish projects. Projects are not under the security team umbrella by default. This will be made very clear on the project page.
  2. Anyone can ask a project to be moved under the security team umbrella. This means a human review (who will first look at automated runs, of course). Again, the security team umbrella will be clearly and distinctively marked on the project page.
  3. Anyone who has N projects under the security team umbrella is automatically awarded the "knows how to write secure code" badge/role, and then every project of said person, past and future, goes under the umbrella. N is to be determined (edit: I do not think the current N=1 is a good choice). Other nominating criteria can be added (maintainer of a Top 1000 project, etc).
  4. Anyone with the "knows how to write secure code" role can do the reviews in #2. This is absolutely crucial. This is a very Drupal-y growth: I mentored you, now go forth and mentor others.
  5. They will be prominently notified of pending project reviews; much more prominently than currently. Given how the sidebar for most projects is longer than the preview page, I recommend putting the review issue queue block (steal it from homebox) below the "View all releases / Add new release / Administer releases" links. Hopefully this constant reminder will get them to actually look at them.

If the above works as I hope it will, then while there will be more applications than now, there will, quite automatically, be a lot more reviewers as well. The quality of Drupal contrib will rise and the workload of the security team will drop. It's like a machine.

Sandboxes remain for experiments, core forks etc to conserve namespace.

Mixologic’s picture

I would like to reiterate what @jthorson said in #66

There is a balancing act between unblocking the contribution path, and maintaining the (accurate or not) public perception regarding high code quality of modules published on Drupal.org.

It is clear that our path from beginning to finally arriving at a "stable" release is challenging for new developers, as well as taxing and time consuming for those who maintain that path. It is also clear that once you have crossed that threshold, anything goes, without any further scrutiny.

Instead of thinking in terms of "how can we make the path to stable releases better" - what if we redefined the destination? What if we provided better tools for how the public perceives the code quality of modules on drupal.org instead?

Currently there are only a couple of clear signals users can rely on to evaluate the quality of a module: number of reported installs and number of downloads. These are good signals, but we have many more that we could add.

As a quick initial brainstorm, here are some potential signals we could expose to users to help them evaluate the quality of a module:

  • Usage Statistics
  • Download statistics (probably *not* that useful)
  • Date of last security review
  • Security incident count
  • User reviews and testimonials
  • External links (blog posts/training/demonstrations/howtos etc)
  • Has/passes automated tests
  • Has Documentation
  • Automated code review results
  • Last commit activity
  • Last Issue queue activity/trend
  • Number of active maintainers
  • Sponsoring organization
  • "Maintainer score" - calculated score based on other projects the maintainers own
  • Granular User ratings possibly weighted by maintainer score.
    • code quality
    • duplication
    • documentation

If users had all this data presented in a way that is easily digestible, would we even need gatekeeping? Wouldn't module developers *ask* for code reviews, write more tests, seek other maintainers, etc.?
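One way the signals listed above could be surfaced is as a single weighted indicator alongside the raw data. The following sketch is purely illustrative: the signal names, weights, and normalization are invented, and choosing and tuning them would be the actual hard part.

```python
# Illustrative sketch: combine normalized quality signals (each in
# the range 0.0..1.0) into one weighted "project health" score.
def health_score(signals, weights):
    """Weighted average of normalized signals; missing signals count as 0."""
    total = sum(weights.values())
    return sum(weights[name] * signals.get(name, 0.0)
               for name in weights) / total

# Invented example values for a hypothetical module.
signals = {
    'usage': 0.9,            # normalized install count
    'test_coverage': 0.5,    # has/passes automated tests
    'issue_activity': 0.7,   # recent issue queue activity
    'maintainers': 0.3,      # number of active maintainers
}
weights = {'usage': 4, 'test_coverage': 2, 'issue_activity': 2, 'maintainers': 1}

print(round(health_score(signals, weights), 2))
```

Whether a composite score helps or misleads is itself debatable; exposing the underlying signals individually may be more honest.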

Additionally I feel like while this is a noble goal:

Prevent insecure and poorly coded modules from being published on Drupal.org

It seems to me like we need to define what "published" means. In this context I feel like it means "blessed by the security team, at the time the security team evaluated that particular module".

Being able to say that a module downloaded from drupal.org does not have security issues is neither realistic nor reflective of the current state of affairs, as evidenced by the large number of vulnerabilities the security team continues to discover in "published" modules. Rather than the security team being responsible for ensuring that all "stable releases" are secure and well coded, wouldn't their time be better spent reviewing the latest patch to a module with hundreds of thousands of users than attempting to cover all "published" modules, even those with 4 users?

I, personally, have *never* built a site that relies solely on stable releases. Nearly every Drupal site out there is a combination of dev/beta/rc/patched modules alongside "stable" modules with version numbers like 2.25. Stable is only meaningful on paper; in reality it is just an arbitrary line the maintainer decided to draw. I'd be willing to bet that if we looked at the updates traffic, the vast majority of sites out there are not running solely "stable" releases, so why behave as if that's what our users need?

I believe we should allow anybody to publish modules on drupal.org, but we must provide more qualitative signals to the end users to decide for themselves whether or not a module is safe, secure, or well written. I believe we should sidestep the concept of "stable" being an achievable state, and recognize that every piece of code on d.o. will always "need work", could potentially have undiscovered vulnerabilities, and is in a variable state of evolution and growth.

gisle’s picture

I think #95 has promise, as it moves away from the illusion that robots will fix this.

But it needs some modifications. In particular item no. 2:

Anyone can ask a project to be moved under the security team umbrella. This means a human review (who will first look at automated runs, of course). Again, the security team umbrella will be clearly and distinctively marked on the project page.

Making voluntary code review attractive and available to (inexperienced) developers is IMHO the key to improving code quality and security.

However, I think it would be dangerous to advertise that a project is under "the security team umbrella", as it may make some people believe the most recent release of the project has been vetted by the security team. I don't like the "knows how to write secure code" badge either. What do we do when that person creates a release that contains insecure code (as happens to the best of us from time to time)? Take the badge away?

We know that even projects from very experienced and respected developers may now and then have security problems, so any indicator that a particular project is "secure", or that a particular person always releases secure code, is bound to be misleading at some point in time. If you must use badges, use something like "Well respected developer" or similar.

I also like the direction taken in #96: Abandon gatekeeping, provide better tools for the community to evaluate the health of a project, and provide developers with incentives to make secure and clean code.

chx’s picture

The umbrella can be explained -- needs to be explained: it doesn't mean the code doesn't contain security holes; everyone knows every codebase contains security holes. The difference is just how many, and what the reaction is when one is found. Something like "This project was vetted before, and security issues will be handled privately by the security team" versus "Warning: the security issues of this project will be handled in public".

Mixologic’s picture

Here's some data pulled from the d.o. update statistics database that show that there are very, *very* many dev/unstable/alpha/beta/rc/0.x modules installed.

https://docs.google.com/a/association.drupal.org/spreadsheets/d/1IdEU6sf...

It shows that more than a year after the release of D7, the percentage of installed "unstable" modules was greater than 80%, and that more than four years later, a significant number of "not stable" modules are still installed on sites (27%).

fuzzy76’s picture

Making voluntary code review attractive and available to (inexperienced) developers is IMHO the key to improving code quality and security.

You are suggesting drastically increasing the number of reviews compared to the current process, which is already understaffed. Especially since these new reviews will only be done by already-vetted people, of whom there are far fewer.

While you don't believe in some people's (including my) reliance on automatic reviews, I really don't believe in relying on manual reviews. It has never scaled, and the lack of scaling is the reason the current process has gained all this friction and all these complaints.

vbouchet’s picture

I went through the review process during the last month for my first module, and I currently have another one pending review. My first module was promoted as a single promotion due to the small number of lines of code.
I really appreciated having my code reviewed by other people, and I learned a few things while reviewing other modules. However, it has been a really long process for a module which received only one piece of feedback, regarding a notice (fixed one hour later) and a missing README.txt.
However, I don't understand why vetted contributors can promote projects without any real review, module duplication check, etc. Fragmentation is a big problem from my point of view.
It would be great to have a minimal manual review when asking for promotion. This manual review is not about the code, but about checking the project page, module duplication, etc.
As suggested in #96, I also think additional metrics and information should be displayed on project pages. Maintenance and development status could be used in a better (and maybe automatic) way. A module with critical issues and no new stable release for two years can't be considered "Maintained", even if the last dev release is 2 days old.

dman’s picture

@vbouchet (slightly off-topic but)
I appreciate your concern that a trusted user is henceforth allowed to promote their own projects without further review. That's a true point to raise.
BUT.
The fact that these vetted users have gone through the hazing that is module review could, in sane and reasonable human beings, remind them that module promotion is a privilege and not a right.
The initial module review is as much educational as anything! Many people miss this factor!

If you treat it as a game achievement that opens up the next level of exploits, with no further consequences, you are playing it wrong.

Every time I (as a developer with the privilege to self-promote) feel the desire to add another module to the growing throng, I know it is my duty to research for duplicates or competing solutions before doing so. And to self-review, or ask for second-opinions on code quality. I invoke this on my own projects monthly.

Because I know how hard-won this privilege was (and still is) I am very careful not to abuse it. I hope that others who have worked their way into my position respect this also.

Not everybody works the same as I, I know, but I choose to believe the best of people until given reason to believe otherwise. If I see you self-promoting without respect to duplication concerns - you *will* be warned or demoted.

gisle’s picture

@fuzzy76 wrote (#100),

While you don't believe in some people's (including mine) reliance on automatic reviews, I really don't believe in the reliance on manual reviews.

What are these automatic reviews that you rely on?

The only one I am aware of is PAReview.sh, which IMNSHO is too anal about line length and C-style comments. I've never seen it catch a single security vulnerability, even when running it on code with more security holes than a colander.

In #37, it is implied that these tools do not exist yet and are 'to be defined'. But creating automatic review tools that find more true vulnerabilities than false positives is really hard. If we're going to postpone changes to the project application review process until automatic scanning tools that work have been defined, developed, and deployed, I fear we're in for a long wait.
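The false-positive problem can be illustrated with a toy example. The rule and the code snippets below are invented for illustration: a naive pattern-based scanner flags any direct output of a variable as a potential XSS hole, and cannot tell sanitized output from raw output, so both snippets trip the same rule while only one is a real vulnerability.

```python
# Toy illustration of why pattern-based scanners generate false
# positives: a naive "unescaped output" rule matches on syntax alone.
import re

NAIVE_XSS_RULE = re.compile(r'(?:echo|print)\s+\$\w+')

def naive_scan(php_source):
    """Return the lines a naive 'unescaped output' rule would flag."""
    return [line.strip() for line in php_source.splitlines()
            if NAIVE_XSS_RULE.search(line)]

safe_code = "echo $title; // $title was already run through check_plain()"
unsafe_code = "echo $title; // $title comes straight from $_GET"

# Both lines are flagged: the rule sees the same pattern, while a
# human reviewer sees that only the second is a real vulnerability.
print(naive_scan(safe_code))
print(naive_scan(unsafe_code))
```

Real scanners use data-flow analysis rather than bare patterns, but the underlying difficulty of distinguishing sanitized from unsanitized data remains.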

vbouchet’s picture

@dman
While doing reviews, I noticed that at least 20-30% of project applications had some discussion about duplication or competition. I also think most people in your position take care to follow these rules, but since vetted contributors can create as many modules as they want, even if only 10% of these users ignore the rules and each create several modules, that can introduce a lot of duplication and poor-quality modules.

To return to the global topic, I think adding metrics on the project page or a way to flag modules as duplicate/competing can help to have a more flexible approval process without impacting the global quality.

fuzzy76’s picture

@gisle I meant the reliance in the proposal, not currently. Currently we are living in the flooded world of manual reviews, where (as @vbouchet just told us) even modules too small to grant vetted status take a month to get approved.

When it comes to the specifics of PAReview.sh, I don't think anyone has proposed blocking projects on code style violations. But I think they _should_ be shown on the project page, because no matter how small, they are an indicator of the care taken when writing the code.

greggmarshall’s picture

Wow, lots of great ideas, not sure I had time to digest all of them, so I apologize in advance if I duplicate observations.

As a module maintainer, I like the idea of having my modules routinely scanned. And I have no problem with an anti-badge if they don't pass. But I also think there needs to be a way to tell the scanning automation that a problem flagged in a previous scan has had human review and should be ignored in future scans. I can see this potentially being abused, but some method needs to exist so that modules aren't flagged for issues that may not be significant.

As for releases, I may be blind and/or missed an update that includes it, but I cannot remember being told any criteria for how to tag my releases. I remember a general description of the "levels" but nothing that tells me when I should promote a module from -dev to -alpha or from -alpha to -beta, etc. That would be a huge benefit to me and I suspect others. Right now I look at number of installs versus any issues, figuring until I get a reasonable number (I use 50) installs there isn't enough use to find any problems. But I would love "official" guidance and possibly criteria for each level.

I like the idea of being able to package a sandbox project. Perhaps instead of calling it -dev we could call it -sandbox to make it clear in make files, etc that it is only a sandbox project. And I would love to have some statistics collected and displayed, even if just to the maintainer, to show how many downloads or installs my sandbox project(s) have gotten. I have a couple of sandbox projects that are there because I think they are too specialized to be full projects, but if they are being actively used I'd reconsider. Of course someone could file an issue on the sandbox to request it, but I have found people don't file issues as often as they should. I frequently ask other developers I work with if they filed an issue when they "complain" about something related to a module.

And I love the ideas in #96 about selection statistics for modules being better organized, especially if they could be exposed as an API. We have talked about screen scraping some statistics we use from project pages...

jhodgdon’s picture

@greggmarshall: We do have documentation on what release names mean:
- https://www.drupal.org/node/467020 (Drupal Core description of Alpha, Beta, etc.)
- https://www.drupal.org/node/1015226 (Section in "Developing for Drupal" about release naming)

greggmarshall’s picture

@jhodgdon, I remember seeing the "Release naming conventions" page which describes the tags but doesn't include any criteria. The "What are alpha, beta releases and release candidates?" page does have some criteria, but it seems to be heavily oriented to Drupal core. What I'd like to see is guidance as a module developer that makes it clear when a module should move from tag to tag. I know a lot of modules that vary widely in how those tags are being used, some consistency would be great.

davidhernandez’s picture

For those attending DrupalCon Los Angeles, Michael Hess (Security Team Lead) and I will be giving a session about this topic. If you'd like to discuss this more in person, please do attend. We will try to keep the presentation part relatively short, to leave more room for Q&A and group discussion.

https://events.drupal.org/losangeles2015/sessions/revamping-project-appl...

greggles’s picture

FWIW, I pinged npm to find out what they do with a report of a security vulnerability, and it's basically 100% hands-off. I think comparing drupal.org to other repositories is interesting, but if you don't compare all of their services, the comparison is rather meaningless. IMO, it's a valuable feature that people put faith in Drupal's contrib repository, and we need to continue supporting that feature.

I think the reason the security team coordinates issues in stable releases is to provide a stable, predictable way to know whether a module has coverage or not. It also has an inertia problem: it's hard to communicate a change in that policy. So... the release of Drupal 8 is a really good time to communicate changes in policy: semver, Drupal 6 extended support, and maybe changing which contribs are covered. FWIW, I'm opposed to dynamic measures (e.g. has 1,000 users) but do prefer something that requires some level of mutual agreement: e.g. the community granting vetted user status and the maintainer promoting to a stable release. I'm open to the idea of lowering the bar and adding more automated and automatable things to the mix (e.g. module maintainers can create a stable release if coder finds fewer than X issues per hundred lines and the project has had a non-stable release for 6+ months).
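
The automated gate sketched in that last parenthetical could be expressed as a simple predicate. This is only an illustration of the idea; the function name, inputs, and default thresholds are all hypothetical, not an existing drupal.org mechanism:

```python
def may_create_stable_release(scan_issues, lines_of_code,
                              months_with_nonstable_release,
                              max_issues_per_100_lines=5.0,
                              min_months=6):
    """Hypothetical gate: allow a stable release only if the automated
    coder scan finds few enough issues per hundred lines AND the project
    has already had a non-stable release for long enough."""
    issues_per_100 = scan_issues / (lines_of_code / 100.0)
    return (issues_per_100 < max_issues_per_100_lines
            and months_with_nonstable_release >= min_months)
```

Under this sketch, a 2,000-line module with 3 scan findings and an eight-month-old dev release would qualify, while the same module with only two months of dev history would not.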

Finally, random anecdote: I created a new account and put it through the process with a random module and it took me two months - I didn't seek any review bonus.

kscheirer’s picture

Regardless of how the mechanics of a review process work, most of the comments above do not address the original post. This is not about a fantasy of what such a process could look like. The below is based on working in the queue sporadically over the last few years. I'm completely in favor of the proposal in the original post.

The process we have actually works pretty well. The issue comes down to the amount of time needed versus time spent. The main problems we actually run into are:

  1. Modules that the programmer is still completing, or mistakenly thinks are done.
  2. Corporations trying to publish a module that implements their service.
  3. People trying to get "git vetted user" status despite not having written any of the code in the application. Can be a corporation as well.
  4. Projects including licensed files.

This process exists because we want to prevent people from posting wild code on drupal.org as official modules. Most of you are fine. The requirements boil down to: 1. preserving the sanity/time of the Security Team; 2. keeping out the really bad code; and 3. everything else: preserving namespaces, fairness, use of tools, contrib vs. core, what sandboxes can do, etc. Some applications are odd! Glaring security holes, the Form API twisted to a foul purpose, people who will not concede any demands by our puny Drupal god (may His waffles be praised) regarding their code style, blatant advertising or over-crediting; the list goes on. Poor software design or logic flaws are not among our biggest problems.

Read the original proposal again: the security team agrees to loosen up their review policy a bit, in exchange for getting more prominent warnings and messaging in certain places. More code gets published with better warnings about which is (potentially) security reviewed.

We ask inexperienced reviewers to review other module applications. There's a name for that: a classroom setting. That's what this queue does: teach people who are new to PHP, Drupal, or programming in general, anywhere between the ages of can-write-code and not-dead. I think it's one of the best features of the queue. All the bad experiences people have listed are absolutely true; atrocious stuff does happen, like waiting a year between reviews, or getting to the end only to get bounced by someone's opinion. You can definitely get unlucky timing during the process, and this needs to be addressed. But if you read the application instructions, actually follow the directions, and write decent code, most people (greggles) can get through pretty fast (2-3 months). So it's a classroom with too few instructors (unpaid), and they are overloaded.

1. Modules that the programmer is still completing, or mistakenly thinks are done.
We're actually pretty good at this. A lot of modules come out more polished and usable at the end of this process. Many first-time writers fail to consider internationalization, making use of Drupal's many convenience functions, or a good user interface. Sometimes a module (and its author) spend a long time in the queue for a good reason. This is usually where we clean up security problems as well.

2. Corporations trying to publish a module that implements their service.
This requirement is from Drupal.org's account policies: individual accounts only. This will stop an application until they individualize their user profile.

3. People trying to get "git vetted user" status despite not having written any of the code in the application. Can be a corporation as well.
They don't realize that they should be asking for the "single project promotion" option we offer, and until now, neither did the reviewers. We can't grant "git vetted user" status based on code you didn't write, but we will do our best to promote that project for you: basically just a security, licensing, and minimal API usage check. In these cases, it could be considered that the code has no author at all! If a team is together only long enough to write it, and a review requests changes, the company may not be able to pay for that time. We could do more to help these people; they often abandon the application and host the code on their own site.

4. Projects including licensed files.
This is a requirement from Drupal.org's licensing policies: nothing with an incompatible license is allowed. I dunno how Flowplayer and that other one get away with it, but it's gotta be fixable, right? We can make it much more convenient nowadays with drush make and the Libraries API. But either way, the process just enforces this requirement.

2 and 4 are not things we will change here. I guess we could, but no one has clearly proposed that. 3 is a misunderstanding: companies don't always read everything on drupal.org before writing their code; it's usually done for a client project and that's it. Overall I'd rather have their code available for improvement somewhere, but sadly these can contain glaring security holes, so they end up somewhere else or nowhere.

Good news, 1 is the easiest to fix!
More reviewers? Application wait times go down.
Fewer restrictions? Application wait times go down.

We are the Disneyland of the open source world. No other project has as many contributors. It stands to reason that we have the most first-time contributors as well. We face problems of not reading instructions and not understanding requirements. We're trying to get you through the line faster and help more, we need more reviewers and less to review.

klausi’s picture

I'm beginning to implement the proposed changes with regards to module duplication which is not an application blocker anymore. Example: I set #2427287: [D7] Addressfield - Japanese Postal Code back to "needs review" after it was postponed on module duplication.

heddn’s picture

In general, as a current git admin, I'm also starting to implement the proposed changes as they relate to reviewing projects. While not perfect, I think the above suggestions move things in the correct direction. I'd be inclined to go with them for a little while and see what, if any, improvements result.

nicoz’s picture

As someone that went through the review process a year or so ago I can tell you it was a long process to get through. My project just sat there and it was somewhat of a discouraging process. That being said, I would rather have that than a ton of poorly coded modules out there breaking my sites. There needs to be an intermediary state of a project between sandbox and full project. Something that exposes the project to general users but with caution. Actually it might be as simple as only allowing a dev release in the project until the user has had a fully vetted project under their belt. Dev releases won't bog down the security team and it will at least tell others users that "this module may break shit".

darol100’s picture

I would have to agree with @nicoz (#144). Having a warning that says "THIS MODULE MAY BREAK SOMETHING", or one that says "Be careful with this module, because it has not passed a peer review", is a great way to make users aware of modules that can be dangerous. The reason I agree with @nicoz is to discourage users from writing poorly coded modules, and I believe having a warning would encourage the user to go through the entire peer review.
I personally think that allowing users to promote their projects to full projects without going through a peer review can create many duplicates. I have met many PHP developers who, when they start doing Drupal, create a custom module for something that could easily be done with an existing contrib module.

effulgentsia’s picture

My personal perspective (not speaking for the TWG):

Reduce the feeling there are two tiers of users; those who can contribute code freely, and those who cannot.

As others have pointed out, the proposal does not achieve this goal. But I think that's ok. Because why is this even a goal? Yes, there are vetted contributors who can do things that unvetted ones cannot. Just like to drive on public roads in most countries, you need a permit or license, which means you needed to prove to someone official that you know some of the rules of safe driving and are capable of following them. Once you successfully pass the test with one car, your license lets you drive any other car too. In some places, you may drive on your own private property without a license. I think the analogy to sandboxes and full projects is pretty clear.

Reviewers will manually review the project, checking only for problems related to licensing, security, or major problems with Drupal api use. Applications will not be blocked for minor coding standards violations, bugs, or project duplication.

+1 to this. While per above, I support the concept of vetting contributors before they can perform certain actions on drupal.org, I'm definitely in favor of having the process be not too onerous. Just like you can get a driver's license even if you make some minor mistakes on the test.

All users, regardless of their Git-vetted status, will follow this process: Create a sandbox project...Initiate promotion...If the project passes the automated scan...

Since I'm questioning the goal of reducing the vetted/non-vetted distinction, I don't see why this proposal should change anything from the status quo for vetted users. Seems out of scope to me. As I understand things, the most pressing problems we need to solve are that many unvetted contributors are finding that:

  1. The current vetting process is too long and frustrating.
  2. While waiting for that long process, sandbox projects aren't fulfilling their needs.

I don't think that adding more process and gates to already vetted contributors solves either of those problems. If there are other problems we're trying to solve (like wanting to strong-arm all contrib modules into obeying all coding standards or to not have any security bugs that can be found by an automated scan), perhaps that should be a discussion moved to a separate policy issue? Not saying those aren't good goals, just wanting to decouple them from the more pressing problems.

Problem #1 is addressed by the earlier blockquote of lightening the review criteria.

So for problem #2:

For non-vetted users, the following restrictions apply:

Only one sandbox project can be promoted to full status.

I can see the rationale for wanting to allow one (rather than 0), as a way to address that. I can also see the rationale for capping it at 1 to start with, and seeing what happens in the wild before raising that cap. However, even allowing the one means that the goal of:

Prevent insecure and poorly coded modules from being published on Drupal.org.

is regressed. On the one hand, this is a goal that can't be met in practice anyway. Just like the driver's license system doesn't prevent all bad driving or all traffic accidents. But, this is a goal that can't be evaluated as a boolean. The licensing system does work to some extent. If we were to allow unlicensed drivers to drive a little bit on public roads, it would probably result in more accidents than we have now. So the question is then how to mitigate that.

The project page will have a visible indicator that the project is maintained by a non-vetted user

That's one way. I'm ok with trying it and seeing what happens. But I'm also ok with entertaining other suggestions, some of which have been raised in earlier comments.

The promoted project will have development snapshots, but cannot have tagged releases. (beta, rc, 1.x, etc.)

I think that's fine as a start if we want to take baby steps, but I also think it would be good to allow some tags, since those are useful for various tools. What about allowing 0.x-unstable-N tags or something like that? But if you want to move from unstable to alpha/beta/rc/full or from 0.x to 1.x, then you need to get vetted.
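
The tag restriction I'm describing could be enforced with a simple pattern check at tag-creation time. To be clear about assumptions: the 0.x-unstable-N scheme is my suggestion above, not an existing drupal.org convention, and the function name here is made up for illustration:

```python
import re

# Non-vetted maintainers may only push tags of the (hypothetical) form
# 0.x-unstable-N; anything else (alpha/beta/rc/full, 1.x, ...) would
# require vetted status.
_UNVETTED_TAG = re.compile(r'^0\.x-unstable-\d+$')

def tag_allowed_for_unvetted(tag):
    """Return True if a non-vetted user may create this release tag."""
    return bool(_UNVETTED_TAG.match(tag))
```

So `0.x-unstable-3` would be accepted, while `7.x-1.0-beta1` would be rejected until the maintainer is vetted.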

webchick’s picture

This is an aside, but that driver's license analogy is one I've seen multiple people use against the current policy. Because we treat drivers (or in this case programmers) with umpteen years of experience like 15 year old novice drivers who need to be babied and given oversight, instead of the competent adult drivers they are, who are simply moving from one state to another and need to get their license updated.

So in that respect, I think the parts of the proposal that move us more in that direction are sound.

davidhernandez’s picture

Here is a link to the DrupalConLA session Michael Hess and I gave on this topic. There is an included youtube video of the presentation and extended conversation.

https://events.drupal.org/losangeles2015/sessions/revamping-project-appl...

kscheirer’s picture

Thanks webchick, for that reasoned response to my rant and getting this back on topic, you've clearly had some practice at this :) Though I still can't believe no one was going to mention me praising his waffles.

We do have people come into this process with wildly varying levels of skill. Do we have any data on how the application queue performs? There have been many valid complaints about the long wait times for approval for experienced programmers. I assume there's also a number of pros who sail through relatively quickly, but so far all evidence has been anecdotal.

mikl’s picture

Just my two cents – unaware of this discussion, I wrote an angry rant about the review process that has been floating around the web since.

To be completely clear, this was not meant as personal criticism of anyone involved with the process, and I'm glad to see that changes are under way.

It is my opinion that the community would be best served with as low a barrier to entry as possible:

We have the new community role on Drupal.org. How about we just require that for code publication?

That is, no vetting. No process. No training wheels. I understand this is a radical change, but it is probably also the easiest one to make.

webchick’s picture

Counter-proposals such as the above need a good answer for how they will address the goals painstakingly outlined in the issue summary.

kscheirer’s picture

@mikl - I like the idea of removing barriers. The best part of the application queue is mentoring new contributors, not acting as gatekeepers. However, the first reason the queue exists is to address concerns from the security team. The process can't be removed entirely without addressing that concern.

At this point, I think it's worth pointing out that we don't know for sure that allowing unvetted contributors to post code would be detrimental to security, do we? Is the review process the only way to address this concern? I believe the current thinking is that we want the security team to review contributed modules, and they are not able to handle the volume as it is, much less if we open the floodgates.

heddn’s picture

We actually do have some numbers to support the security concerns. See #17 above.

greggles’s picture

I think there are two critical items: security is one of them and we have data that the current process drastically reduces the number of security advisories that get released in a year. Licensing (and general adherence to the Terms Of Service for git) is the other big one.

mikl’s picture

As for security, it's my understanding that the security team only reviews a small subset of the code published on Drupal.org. So changing the policy would only mean that it's a lower percentage being reviewed, all other things being equal.

It's also logical that any improvement in this process would cause more code to be published on d.o, leading to more potential work for the security team.

However, if we simplified the review process, we would free up resources that could be used for the security team.

As for licensing, I don't really know why this is a concern. If Github can allow their users to host code under whatever license, why can't Drupal.org? The legal concerns should be the same, as I understand it.

The more important point about all this is that it only concerns first-time contributors. It does nothing to prevent people from publishing code with licensing violations or security holes once they get approved.

So while we might prevent some of these instances, we do so at an immense cost to the community. How many people have given up on becoming contributors because of this? Thousands, I'd guess. Plus all the effort sunk into these reviews by the volunteers reviewing applications. And if we want to build a new review system, all the effort it takes to build that.

So I understand the concerns. I just don't think this process is a cost-effective way of dealing with them. And maybe we should take the GPL's "NO WARRANTY" clause seriously, and communicate clearly that downloading stuff from Drupal.org is no more or less safe than getting it off Composer.

I understand the appeal of the Apple-model where nothing is released on their platform which has not been painstakingly reviewed by their app review team. But for all the millions of dollars they pour into that, and hundreds of full-time employees, that's still not a good experience for app developers. I don't see how we can reasonably expect to do better than that with volunteers.

DamienMcKenna’s picture

The security aspects aren't about actively reviewing codebases; they're about supporting projects when a security issue is reported. As we saw this week, the security team went through the whole security release process for a project with only two sites using it: https://www.drupal.org/project/uc_novalnet_payment This is true of any full project that has stable releases: if someone reports a security issue, the security team will handle it with the same process and professionalism as if it were a security problem in a top module like Views or Token.

webchick’s picture

As for security, it's my understanding that the security team only reviews a small subset of the code published on Drupal.org. So changing the policy would only mean that it's a lower percentage being reviewed, all other things being equal.

The security team does not do any code reviews, per se. They manage the process for security announcements for any module with a stable release (ours is the only project with this "feature," AFAIK... and it helps lead to Drupal's generally high reputation for security). See https://www.drupal.org/security-team for more details. Basically, they evaluate incoming issues for their impact, working with maintainers to get releases patched and SAs announced, etc. There is a legit concern that the team is already over-worked as it is, so increasing the number of modules by 1000-fold would risk the integrity of Drupal's security overall. Reviewers of project applications generally are already members of the security team, so stopping the review process will not lead to an increase in resources.

As for licensing, I don't really know why this is a concern. If Github can allow their users to host code under whatever license, why can't Drupal.org? The legal concerns should be the same, as I understand it.

Because all Drupal modules/themes/distros/etc. must be GPLv2+ compatible: https://www.drupal.org/licensing/faq. And the fact that a highly skilled developer who's been around for as many years as you didn't understand this (not picking on you; this licensing stuff is definitely tricky) reinforces the fact that we can't expect our predominantly non-technical site builder users to understand this. Hence why we enforce the license restriction for all code uploaded to Drupal.org, so users can just download modules/themes without worrying about having a legal staff.

FWIW, I totally agree with you that I'd love the review process to DIAF and that at best it's completely out of touch with the way that developers are used to contributing code, and at worst causes tremendous damage to our contributor community. But so far, no one pushing for this direction has had good answers to the above, nor were they willing/able to do the work. Understanding the sound reasons for why the current process exists is a good first step.

mikl’s picture

Reviewers of project applications generally are already members of the security team, so stopping the review process will not lead to an increase in resources.

I don't see how that follows. If they didn't have to spend time doing reviews, that would free up time for other work, right?

Because all Drupal modules/themes/distros/etc. must be GPLv2+ compatible

Actually, that's not necessarily true. The details of this are murky (no actual case law to settle it), but as I understand it, the GPL requirement does not necessarily extend to all parts of a module, only the parts that interact directly with Drupal's APIs.

So it could (and should) be possible to bundle a JavaScript library with your code, without having that library needing to be GPL licensed (ie. it could be MIT, BSD, ISC or a similar less restrictive license).

And while I understand the appeal of being able to say “everything downloaded from Drupal.org is legally safe to use”, that is not the case today, and it has never been so. Because the review process only prevents first-time contributors from making mistakes (or simply ignoring the rules). And it does nothing to prevent people from posting code that they don't have the legal rights to release under GPL.

cweagans’s picture

I think we're a little out in the weeds here. We have a licensing policy that we follow. This is not the issue to change that fact.

DamienMcKenna’s picture

@mikl: The licensing requirement is a Drupal.org standard, and there's a whole project and working group/committee designated to work through breaches of it.

gisle’s picture

@mikl wrote:

Actually, [that all Drupal modules/themes/distros/etc. must be GPLv2+ compatible is] not necessarily true. The details of this are murky (no actual case law to settle it), but as I understand it, the GPL requirement does not necessarily extend to all parts of a module, only the parts that interact directly with Drupal's APIs.

It is our Git repo policy that requires all components included in modules/themes/distros/etc. to be GPLv2+ compatible, not the GPL as such.

However, until that policy is amended, it stands and should be enforced.

In case you're interested, there is another open issue for discussing the Git repo policy - please take a look at: #1856762: Revisit and Redefine Drupal's Policy on hosting 3rd party files and/or files that is not strictly GPL.

mikl’s picture

Yeah, I am aware of this policy. My point was just that this is not necessarily done for legal reason, but rather because we have this policy. Changing that policy is out of scope for this discussion.

I'm not trying to argue about whether or not there's benefits of the review system. I agree that it helps ensure that standards are higher for first-time contributors and avoids some very basic security flaws.

My point is that these benefits do not outweigh the huge downside of losing potential contributors by the droves.

And if we could guarantee people a short turn-around time, I wouldn't even be against the mandatory code reviews (although I wish people would stop rejecting modules over stylistic issues). But the problem is that we can't guarantee anything.

So if we put roadblocks somewhere in the process of publishing your code on d.o, it will unavoidably lead to problems and frustrations.

So with all respect to the security and licensing concerns, I think that they pale in importance, even to the point of insignificance, compared to the all-important goal of attracting new contributors to Drupal.

Anyone doing e-commerce nowadays is all about removing barriers to conversion. So why don't we adopt the same mindset in this area?

mpdonadio’s picture

From #127,

Reviewers of project applications generally are already members of the security team, so stopping the review process will not lead to an increase in resources.

This is not strictly correct. There is overlap between the list of project application review admins and the security team, but among the active reviewers, I think @klausi is the only one on both. I don't think any of the new admins since last summer are on the SecTeam. Part of the mentoring process to become a project application review admin is demonstrating the ability to find security problems in applications.

jthorson’s picture

The technical working group would like to thank everyone for their continued feedback on this issue. We recognize that this is a large issue with a number of smaller components and wide-ranging implications; and would like to segment off some portions of the discussion in order to facilitate initial movement forward on some aspects of this topic, while future investigation and discussion can continue on additional potential changes for consideration in the future.

Based on the input provided, along with internal discussion during our TWG meetings over the last few months, we would like to release the following recommendations:

i) As the scope of the project application process is currently limited to 'non-vetted' users, we are not suggesting any policy changes for existing 'vetted' users at this time. Any proposed access changes for vetted users can be discussed in a new policy issue.

ii) For non-vetted users, we are recommending a policy change which allows new contributors to promote one project to 'full project' status, once it passes an automated scan for basic Drupal coding standards compliance, licensing issues, and common security and/or API usage issues. This 'first full project' should be restricted to development snapshots only; with no tagged releases. Discussion on allowing some additional tags (i.e. unstable, alpha, etc.), or extension to allow more than one project for non-vetted users can be discussed in follow-up policy issues.

iii) We are recommending a policy addition that all projects with only a dev snapshot (and no official tags / releases) have a message and/or indicator added to the project page. This indicator should flag that i) projects without releases should be considered unstable and/or experimental, and ii) use of these projects on production sites is discouraged, and done so at the user's own risk.

iv) A new policy issue will be opened to discuss the addition of other indicators/warning messages for project pages, based on certain milestones (for example, passing an automated scan or having Security Advisory coverage).

v) We recommend the 'requesting vetted status' process be adopted as outlined in the issue summary:

  1. Submit a project application request in the Project Application queue. The project submitted for review should have already been promoted to full status, and passed Drupal.org’s automated scan.
  2. Reviewers will manually review the project, checking only for problems related to licensing, security, or major problems with Drupal api use. Applications will not be blocked for minor coding standards violations, bugs, or project duplication.

These recommendations are intended as a 'starting point' for changes to the project application process, and do not represent the full extent of anticipated changes to this process over the long run ... but it is our hope that, in releasing these recommendations, we can begin moving forward with implementation of some of the improvements which have been discussed.

jthorson’s picture

Status: Needs review » Reviewed & tested by the community
kreynen’s picture

automated scan for basic Drupal coding standards compliance, licensing issues, and common security and/or API usage issues.

That sentence could be clearer. Based on what you've written in section v.2, I believe the automated scan only applies to coding standards and that the license, security, and API review is still a manual process.

Is that correct?

kscheirer’s picture

licensing, security, or major problems with Drupal api use

License and security rarely give us any issues, but API review is where it starts to get hazy. I understand the intent, but we should consider defining this term concretely or removing it entirely. Most of the people abusing the API are making more work for themselves anyway, and anything serious will be covered by the security clause, so I'm in favor of removal.

@jthorson and the TWG, +1 for all these changes

@mikl, does this solution satisfy the issues raised in your blog post?

mpdonadio’s picture

@kscheirer, if you look at admin vs non-admin reviews, you will see that most of the reviewers are pretty lenient wrt. the API usage. The problem that I have seen is the peer reviewers who will set things back over trivial matters.

davidhernandez’s picture

@jthorson, in #134 can you explain what is different there from the current issue summary? Or perhaps update the issue summary? The only thing that looks different is i).

mikl’s picture

#137:

@mikl, does this solution satisfy the issues raised in your blog post?

I'd say to some extent. There'll still be restrictions on what a user can do without a review, and as long as there's a manual/human labour part to the review process, we have a significant risk of long review wait times continuing to be an issue, as well as user experience issues: properly explaining to the user why they can create some kinds of releases but not others, etc.

In addition, this change is (by my estimation) a lot more development work to implement, so that will likely mean that it'll be a while before it gets implemented.

webel’s picture

Mikkel Høgh, through his article Drupal is still a gated community, is my new superhero. It is absolutely spot on, including this:

I know several people who've hit this wall when trying to contribute code. It's not uncommon to wait several months to get someone to review your code. And when it does happen, people are often rejected for tiny code style issues, like not ending their comments with a period or similar.

And this (regarding your many "hoops"):

If there's a more efficient way to make people angry, I can't really think of it.

Every single person who is involved with the Drupal project applications process AND/OR with the Drupal coding standards process should read every word of Mikkel's article, and please know it rings more true than I can describe. For me, the Drupal project application process has become utterly painful, so that I do not wish to have anything to do with your official project submission process anymore (not least if I have to be subjected to some of your more ridiculous coding standards).

And now read this about your insane setter standard and realise that your insistence on forcing people to do things the Drupal way (including the legacy Drupal way) is making some very talented potential contributors incredibly frustrated, and above all (yes, we all work for nothing) you are collectively wasting their time (and I mean months and months of it) unnecessarily ...

... while many of you who have already got full project rights are managing old projects and posting new projects that do not meet the coding standards even closely

David_Rothstein’s picture

Minor coding styling issues (and even major ones, within reason) aren't supposed to block project applications even now, as far as I understand. Maybe part of the problem is that this isn't documented clearly for reviewers? (which is one of the things the proposal here is trying to solve)

I think if someone tries to block an application for minor coding style issues, that's pretty bad and you'd be justified telling them not to do that.

mpdonadio’s picture

Coding style was never supposed to block an application on its own, except for total disregard for Drupal coding standards. I know this is a problem with some of the peer review (which the admins do their best to catch, but we aren't perfect), but I am unaware of any review admins who have pushed back applications because of this.

I just edited the Review Template to make the coding standard with regard to reviews more explicit.

webel’s picture

@mpdonadio

> Coding style was never supposed to block an application on its own, ..

But what is clearly happening is that people check first whether a project has passed the pareview/Coder script and don't pursue projects that have not already.

I have made the suggestion that one could have different levels of compliance and Coder Compliance Profiles: #2306733: Advanced feature idea: Coder strict mode (applies all rules) and user selectable rule switches, together with Coder configuration file for submission with a module. I have also suggested that projects that meet a certain level could be advertised with a compliance icon. Something like "Meets Drupal Core Coding Standard" or alternatively "Meets the Webel UML-friendly PHP coding standard for Drupal".

I am for removing, in any shape or form, the entire requirement that new projects be subjected to any Coder measure, because people who already have "the keys to the kingdom" are publishing modules that don't meet the standards anyway. It is not fair.

[Edit: fixed incorrect links and adjusted time statement below]

My application has been held up since Aug 7, 2014, about one year now.

Project Applications: [D7] ooe:

My own sandbox project OOE = Object Oriented Examples = One Of Each, apart from being unusual insofar as it has a special role, to act as an educational demo of a truly OO bridge to D7 (with graphical UML diagrams), is also being held up because nobody so far has embraced my tiny patch to D7 core to support OO methods as form callbacks:

- #2165895: drupal_retrieve_form() issues Undefined index: warning on line 764 when static class method used as $form_id: hinders use of object-oriented FormController

- #2166371: Drupal 7.36: form.inc 1471: form_execute_handlers($type, &$form, &$form_state): $function($form, $form_state); Does not work for static class method 'Drupal\mymodule\FormContoller::handler'

I suspect this discourages people from reviewing the module, even though I've tried to make it easier by providing both a patch to D7 and a complete patched download bundle for D7+OOE so they can easily test it.

Or somebody involved with core could please finally take my (harmless to others) D7 core patch seriously, and realise that it was not correct in the first place to assume in D7 that form callbacks had to be plain non-OO functions. It would take minutes to fix, and would mean a lot to me after waiting about 12 months for some Project Application rights love.

Fabianx’s picture

#144:

I have reviewed your application, and the reason it was held up is, in my opinion, that you provided a huge code base to review.

We should add somewhere that project applications should be in a scope that is still reviewable.

(e.g. the same as we frown upon 400 kB core patches ... )

webel’s picture

@Fabianx wrote:

> I have reviewed your application, and the reason it was held up is, in my opinion, that you provided a huge code base to review.

Many thanks, your constructive suggestions there are most welcome, but it will take me some time to address them, for example by isolating part of my large project so that others can process it more easily (or, as I suggested there, even at this late stage starting from scratch with another smaller module like Flag Plus).

> We should add somewhere that project applications should be in a scope that is still reviewable.

Yes, or (this is not aimed at Fabianx, but at the whole of Drupal.org) ...

Or there is another way

Of course, Drupal.org could finally get rid of this incredibly painful Project Applications process that almost no other open source effort uses, so that people can simply contribute instead of having their time massively wasted, as argued by someone else very well here Drupal is still a gated community.

As far as I am concerned, this entire review process is nothing but a painful imposition that should be chucked in the bin.

So we can give. Easily. Without fuss. Without this pain.

Because the Project Applications process does not work; it just turns good people off

Webel

webel’s picture

@Fabianx wrote:

> We should add somewhere that project applications should be in a scope that is still reviewable.

I appreciate this is perhaps a bit off topic, and not intended as a reply to your constructive suggestion, but if the Views module, perhaps the most useful module ever in the history of Drupal but quite complex, had to penetrate the Project Applications circus, would it languish in the queue almost completely without useful feedback for over a year?

Or here's a radical suggestion. You could make exceptions. You could do things like this:

"Wow, this Webel guy is clearly really keen, he's mapped out most of the essential parts of the Drupal 7 core in an object oriented bridge, and on his own time, unpaid, out of passion. He is clearly the kind of smart person we want, but he is being massively inconvenienced by the fact that his innovative approach does not tick all the boxes. We could simply flip the switch, save him any more pain, and grant him, as a special case, full Project rights"

Before he is dead due to old age. Or leaves. Or both.

pingwin4eg’s picture

The main reason a project application gets stuck is the absence of a review bonus. @webel, you could try to help other projects with reviews. Complaining about the "bad" process is not the right way to bring attention to your project.

mikl’s picture

@pingwin4eg so in other words, you have to grovel before the review masters to have a chance. Thanks for affirming a big point of my article.

Fabianx’s picture

#147: Yes, exactly that would happen. Views would indeed be sitting in the queue for years - unless it had a very large user base.

Given it took a team of 4 members full-time months to bring views into core, it would have never passed the project application process.

However what probably would have happened is that someone would have asked to be a co-maintainer and sponsored the project to be a full project.

That would not have helped the individual or team writing Views, but it would have helped the project itself.

--

The problem with such exceptions is:

- Where do they start and where do they end?

In that case the best thing is really to get some people interested and get a sponsor for your mega-project and just submit a smaller one that is reviewable in scope.

Also, overall you just need _one_ other person who reviews your code and peer-checks it for functionality, security issues, and basic standards adherence.

Yes, that is a problem for 'lone riders', but if you start being a part of the Drupal community at large, that usually comes naturally; it can theoretically even be a co-worker, etc.

And the new guidelines discussed here should help in any case :).

----

What I get out of all of this is the following insight:

- The project application process is not there to vet projects themselves; it is to vet whether individuals are responsible maintainers who adhere to the policies and basic standards.

... very similar to the reputation systems that other communities like stackoverflow have ...

And maybe a different focus (maybe this is already the case) would help in general - plus the policy discussed here.

webel’s picture

> @webel you could try to help other projects with reviews.

Yeah, as if I haven't spent enough time already contributing to Drupal.org, such as perhaps being the person who has spent more time than anyone in the coding standards and Coder issue queues.

Person A: I submitted an irrelevant little module that I contrived that serves almost no useful purpose. I made it through the project application process quickly because there was almost nothing to consider. I have barely anything to contribute anyway, but now I can write god-awful code that nobody even inspects any more, because now I have the "keys to the kingdom". I can even say at job interviews that I have a contributed module to show.

Person B: I have been working with Drupal for some time and had a good idea for a real contributed module, but this is taking a lot longer than I thought it would, some months. I found the coding standards a bit of a pain but got through them with the help of a plugin for NetBeans, but the robot script still complains about some things, so people tend not to look at my project to review it; they expect me to deal with the robot's coding standard review first. I am still waiting. It is frustrating. I have more to contribute, but this is holding me up a lot. I have heard that some years ago nobody had to do it. I don't understand why there are a bunch of people who already code modules in non-standard ways anyway. I expect I'll have to wait a few months more.

Person C: I am an extremely experienced, internationally acknowledged expert in OO and model-driven development. As a result I don't agree with all of the coding standards, and in the same way that Einstein could not use the notations of "flatland" with relativistic physics and had to use four-vector notation and tensor notation, I have to use advanced notations and other languages like UML, and I simply don't agree with and can't always use the coding standards of Drupal for PHP (which is not the fault of PHP as a language). I am using novel techniques and innovative approaches, which is doing reviewers' heads in. I have a module that demonstrates for the first time ever a true API for Drupal, but it is necessarily quite complex, so I will have to wait for years to get full project rights, even though I have worked with Drupal for nearly a decade, have developed custom modules professionally and for private use for many years, and have also worked with other developers here on Drupal.org who have made it through the magic gateway. I claim that my module is one of the most important educational demonstrations in the history of Drupal, and it is designed to make a major contribution to model-driven engineering for PHP, way beyond Drupal. I am now told that such a contribution is not suitable for the Project Applications queue. So having spent nearly a year waiting, and a large amount of time on the module itself (including trying to make D7 core more OO-friendly and supplying patches for it, which have all been ignored), I will now have to spend more time either adapting my large module to cut out a small part of it, or revoking my entire Project Application and starting again with another module.

Or Person C could review other projects, which is the very game that the article I referenced rightly criticises

webel’s picture

> @webel you could try to help other projects with reviews.

PS: Or my unpaid time could be better spent doing the model-driven software engineering that most other people can't, and reviewing Drupal 8 in UML and SysML, using the advanced graphical software engineering tools I helped develop.

In fact, I would reckon that most people's time could be better spent on other Drupal work than on doing those reviews.

webel’s picture

> @webel you could try to help other projects with reviews.

If Drupal offered drivers licences

A newcomer wanting a driver's licence can EITHER:

1. Sit a test with an experienced instructor driver, who already has a driver's licence.

OR, if there are not enough experienced instructor drivers to go around, and they are mostly too busy:

2. Act as a tester (without, mind you, having a driver's licence themselves), and help issue driving licences to .. ahem .. other newcomers.

I am sure the roads would be much safer this way, wonder why nobody has thought of it before, better than bribery, blackmail, and nepotism combined.

From Mikkel's external post again (because it is spot on):

The common answer to people that are frustrated with the waiting time, is that they should go get some brownie points to expect getting reviewed any time soon.

Basically, the messaging goes a bit like this:

"Oh, you're unsatisfied with having to jump through all these hoops to contribute to our open source project? Here's some more hoops to jump through."

If there's a more efficient way to make people angry, I can't really think of it.

Shame I don't have any pictures of hoops handy; I am sure as hell sick and tired enough of jumping through them already for the sake of getting full Project rights to not want to suffer jumping through any more. I have surely done enough.

pingwin4eg’s picture

First of all, you should understand that Drupal is a community, not just a group of individuals with huge ambitions. And there are certain established standards which have been proven by the community over the years. The standards are needed so that we can quickly understand each other's code while working on projects together as a community.

I'm sorry @webel, but it seems to me that you are behaving like "I know what I'm doing better than all of the community".

Of course you can propose changes to the standards, but in the appropriate issue queues and in a rational manner. For that you don't need permission to create full projects.

We all understand that the current state of things in the project application process is not perfect and that the process needs changes. But let's think of how any of us can help, and propose constructive solutions.

The review bonus program is not groveling; it is a way to help other applicants, to prioritize your own project application, and to gain experience writing secure and API-consistent code. IMHO it is the best currently working solution for sorting the applications queue.

So let's not flood in this issue, OK?

mikl’s picture

@pingwin4eg, as long as you're putting pressure on people to do something, it does not qualify as “helping”. It's just bowing to the demands of those in power. And this very thread is for discussing this policy, so kindly stop accusing anyone of “flooding” for voicing their complaints.

As for constructive solutions, I suggest you (re)read my article. I've proposed a very simple solution which could be implemented without boiling the ocean.

jthorson’s picture

None of the recent comments on this thread are specific to the proposal in the issue summary... Which has been discussed and agreed to at various Drupal governance working groups.

This thread is NOT a place to bring your complaints and personal soapboxes. I have been actively involved in trying to change this process since my initial introduction into the community many years ago, and nothing kills any momentum we ever gain faster than an onslaught of whining, complaining, and rehashing discussions that have happened at least a dozen times already. This does not foster an environment where change can flourish ... It breeds a toxic environment that even those actively trying to improve the situation end up wanting nothing to do with, and so they walk away and nothing changes.

I would implore anyone contributing to this thread ... Please try to keep your comments focused on the specifics of the proposal as presented ... If all you want to do is gripe and point out the problems with the process as it stands today, then I suggest you take yourself to the g.d.o/code-review page, where you'll find a dozen other threads full of complaints ... Read through them all, and then, if you have something that hasn't already been discussed to death, add it there.

This is the TWG queue, and there is a TWG proposal on the table ... Can we please focus on the merits of that proposal, and take any thread hijacking elsewhere?

(Note: this post is made from a personal perspective, not related to my role as a member of the TWG ... and as such, should not be interpreted as me speaking on behalf of the working group.)

webchick’s picture

I think what's needed to close this out is:

1) An up-to-date issue summary that includes the new agreed-upon specifics of #134.
2) Follow-ups for all the "next steps" in the discussion.

and probably also:

3) A FAQ page that explains why the process is set up the way it is and what ground rules any new process tweaks would need to adhere to. This way people who are extremely frustrated can try and work with the people working on making the process less onerous, without doing what most of them do (as evidenced here) which is totally disregard any such rationale and throw around insults. We can't really blame them though if there's not an easy read that explains what and why.

mikl’s picture

Well, to address jthorson's accusations, I'd say I resent your recasting of my argument as "personal soapboxes".

As for the security concerns, there's a lot of FUD around that, but no one seems to explain why Drupal.org should have more bureaucracy in this regard than other software distribution systems like Composer, NPM and the like.

Initial code reviews would protect us only from security holes in new modules by new authors, which means almost no impact on the community at large.

So while I agree that the proposed solution would alleviate some of the problems with the current process, it is also a new and very complex setup, that will likely take several person-months of developer time to implement, and probably even more effort to test and deploy. Which means it eats up a lot of resources that could be used on other things, and also that it will probably not happen any time soon.

So while I understand that you find the continued discussion tiring and even annoying, try to imagine what it's like being on the other side of the Great Wall of the Code Review Process, being told that the process will be somewhat less onerous at some indeterminate point in the future.

klausi’s picture

@mikl: of course the project application process has a huge impact on the community at large: we detected 400 security issues in the project application process over the last 4 years, and those are only the confirmed and tagged security issues. If you calculate the several hours per issue that the security team would otherwise have spent coordinating maintainers and security advisories, plus the time the community would spend reading all those advisories and applying security updates, you get a pretty decent impact.

So when you talk about dropping the wall, please also always mention that we would be dropping security support for contrib code; otherwise we are shifting quite a bit of new work to the security team.

mikl’s picture

Well, how big are those modules? There are a lot of modules that are only installed on a handful of sites. So in that light, those 400 security issues would maybe only have affected a few thousand Drupal installations. That's all well and good, but given that Drupal sites number in the millions, can we truly say all the work that went into those 400 issues was well spent? Especially considering that we're basically turning away contributors with the process.

Your underlying assumption is that the security team has to support everything published as a stable release on d.o, but I don't see why that's necessary, or even reasonable. Why should the sec team be obliged to support modules that are only installed on five sites?

I think it would make more sense to make security team support a value-added thing, rather than having it serve as a brake on our community growth. Because that's what this boils down to, in my opinion.

The community growth is too big for the security team to handle. So right now, we're putting the brakes on growth through bureaucracy, rather than revising the security model into something that can scale with growth.

webchick’s picture

So one of the said follow-up issues mentioned in #157 should be "Explore alternate methods for the security team to support contrib that is less taxing" or something to that effect, where we could post suggestions such as #160.

However, #160 is off-topic for the purpose of this thread, as far as I can tell, as the TWG has repeatedly asked for us to keep the focus on the actual proposal here.

kattekrab’s picture

Public Service Announcement.

Hi all,

This issue has become heated, and somewhat unproductive. I think that's understandable given the real commitment and passion displayed by everyone on making Drupal better. That's what you've all got in common here.

So, this is a gentle reminder to everyone to please review our Code of Conduct.

Yes, the project application process is broken, and we're all trying, through consensus, to improve it. As @webchick points out in #157, let's work together to update the Issue Summary, and outline where we have consensus and where we're still struggling to agree on a path forward.

But can we please stop the name calling and accusations? Collaboration, consensus and good will are the key ingredients we need now.

We all want to solve this.

cheers
Donna
(chair, Community Working Group)

DamienMcKenna’s picture

Just as a reminder of what's at stake here - there are currently 82 RTBC project applications, 114 that are In Review and another 85 that Need Work. As part of the work on this process could we please have the project admins go through and approve all of the RTBC projects, some of which have been in that state for months? Thank you.

I've created a new issue to discuss the security team aspects of this discussion: #2532062: [policy, no patch] Changes to the security team process / tools to reduce workload from increased project applications

DamienMcKenna’s picture

I created another discussion related to automatic testing: #2532162: [policy, no patch] Mandate all projects have Automated Testing enabled

DamienMcKenna’s picture

I created another discussion related to project releases requiring tests pass (#2532166: [policy, no patch] Mandate all project releases pass tests) and another to add code reviews as part of the automated testing (#2532246: [policy, no patch] Add code reviews as part of Automated Testing).

DamienMcKenna’s picture

I've grouped my various ideas for improving the review process under one umbrella: #2532352: [policy, no patch] Simplify, automate parts of the project application & release processes

Apologies for working backwards.

greggles’s picture

re #160:

Your underlying assumption is that the security team has to support everything published as a stable release on d.o, but I don't see why that's necessary, or even reasonable. Why should the sec team be obliged support modules that are only installed on five sites?

In #110 I covered how we are in the current position and some of what it would take to get to a change:

I think the reason the security team coordinates issues in stable releases is to provide a stable, predictable way to know if a module has coverage or not. It's also got an inertia problem: it's hard to communicate a change in that policy. So...the release of Drupal 8 is a really good time to communicate changes in policy: [e.g.] semver, Drupal 6 extended support, what about changing which contribs are covered? FWIW, I'm opposed to dynamic measures (e.g. has 1,000 users) but do prefer something that requires some level of mutual agreement: e.g. community granting vetted user and maintainer promoting to a stable release. I'm open to the idea of lowering the bar and using more automated and automatable things to the mix there (e.g. module maintainers can create a stable release if coder finds fewer than X issues per hundred lines and the project has had a non-stable release for 6+ months).

This issue is probably not the place to discuss which releases get security team coverage - that's a sufficiently complicated issue and I don't think the answer to it would affect this proposal (which is sound, and a step in the right direction even if not a perfect solution).

webel’s picture

@DamienMcKenna

.. could we please have the project admins go through and approve all of the RTBC projects, some of which have been in that state for months? Thank you.

+ (10)^(10^100) [or + a googolplex]

BTW in some cases it is over a year (especially for larger projects that may have more to contribute).

kattekrab’s picture

As suggested by @webchick in #157 I'm going to have a go at updating the issue summary over the next day or so. It seems we are actually close to an agreed approach which will address many of the pain points that have been identified.

The community working group discussed this issue at length this week and we want to acknowledge a few things.

  1. This is a long standing issue.
  2. It's complex and impacts many different stakeholders.
  3. Our peer review process for hosting projects on D.O is valuable, and sets us apart from other open source communities whose third party extensions, plugins and modules are fragmented.
  4. It's a cause of much frustration which leads to conflict.
  5. It's a barrier to entry for new contributors.
  6. Volunteer time is limited, and reviewing is often thankless and undervalued.
  7. It's hard to find information about how it works and how to help.
  8. Novice reviewers knock back projects to needs work for minor issues.

Thanks to
@greggles for the reminder the specifics around security implications are best discussed elsewhere.
@DamienMcKenna for posting followups that will make the IS update much easier.
the whole TWG for painstakingly working through this.

Apologies to all who are still frustrated. However I ask of you a little more patience while we fix this. Because we ARE fixing it.

cweagans’s picture

davidhernandez’s picture

Quick note: we've been having a series of all-D.o working group (https://www.drupal.org/governance/drupalorg-working-groups) meetings that included (quickly scanning this issue) at least jthorson, webchick, and myself. The purpose is to discuss priorities for the D.o roadmap, and that includes the changes in this proposal. So while it is difficult to always know what gears are turning in the background, just know that it is being worked on.

I do want to agree with some of the sentiment of jthorson's earlier post. His comment may come off to many as agitated, but I sympathize greatly. Having had various levels of involvement in the process over the years (and, yes, I've actually gone through it) I know what little satisfaction there is with the current process, but also know how hard it has been to make changes. And Jeremy has been fighting for a very long time to make changes, as have many other people. Please do recognize that there isn't a single person involved in this issue that doesn't care a great deal about getting this fixed. That is why many of us are trying to avoid derailing any progress we're already making.

My main focus (we really started pushing for this almost a year ago now) has been to move forward in any way we can. Because of the complexity of this issue, every step we take will have to be the result of compromise, but it is important that we keep taking steps forward. In the past we've only ever done what our community is very very good at, which is to discuss a problem ad nauseam until we hit a wall. We make no progress that way. Instead, we should be focusing on where we align, less on where we don't, and make progress in that direction.

We let governance do its job, and, for the first time ever, we have a plan. It has been approved by the necessary working groups. We are going to implement it. Reviewers have already started implementing their part. What we need is progress on the technical changes. Michael Hess and Neil Drumm could use help getting the code scanner production ready. If you are capable of helping, please, please contact them. It is the single most important piece we need in place, and the more progress we make the better. Then, as we start to actually see the effect of these changes, we can evaluate whether we are on the right course. If not, we will discuss more changes, and implement them one step at a time. Nothing here is written in stone for all of time, but our lack of progress is becoming dangerously close to it.

Fabianx’s picture

#170: Thanks so much, you are a hero! (That innocent "done" actually means that he reviewed and fixed the 82 RTBC issues ... )

cweagans’s picture

I'd like to make one suggestion to the proposal. Requesting vetted status - if an application has been sitting in the RTBC state for > 4 weeks, it should just be auto-accepted. Part of the queue that I cleared out today had applications sitting in RTBC for over a year, and that's a huge fail. If nobody objects within 4 weeks, then I think it's safe to assume that nobody objects at all. 4 weeks is still a long time, but I think it's pretty reasonable given that the project application queue is still volunteer driven.

If this doesn't happen now, I won't be too torn up about it. I'm keeping an eye on this page https://www.drupal.org/project/issues/projectapplications?text=&status=1... with the intent to keep it empty

kattekrab’s picture

I've spent a solid couple of hours today reading through the whole thread, and looking at some of the older issues that also explore the project application review process.

The spin off and follow up issues created by @DamienMcKenna (in #163 - #166) have been added as related issues.

I intended to update the issue summary, but I don't feel capable of doing so. There are no substantial differences from what's proposed in the issue summary as far as I can tell; however, there is uncertainty and a lack of consensus on how it might apply to existing vetted users, which I would think is kind of out of scope anyway.

Key comments synthesising the discussion are at #67 by @gabesullice and #134 by @jthorson.

Here's the notes I took while reading, in case that's helpful to someone else as signposts to the discussion.

#10 @DamienMcKenna - force patch testing

#8 @gisle - only full projects with tagged releases can be translated [edited]

#16 @cweagans - add a "help me improve my module" button

#19 @cweagans - can we add security scan to the auto review?

#25 @gabesullice - can we have a way to flag duplicates somehow?

#31 @kreynen - raises concern about the review bonus

"These developers don't understand why these "unskilled contributors" looking for a bonus are providing feedback. By the time someone more skilled even looks at the issue, it can be too late. The developer has moved on thinking "if that's the type of person responsible for code on Drupal.org, is this really something I want to be involved in"."

#52 @randallknutson - explores different approaches taken by other projects

#62 @DamienMcKenna - adds followup/spin off purge squatters

#67 @gabesullice - adds a good summary up to this point

#134 @jthorson - final proposal for action followed by RTBC

#139 @davidhernandez - what's different to IS? Only point i) ?

#158 @mikl - agrees this is an improvement, but still concerned it's going to take a lot of time & effort to implement.

"...imagine what's it's like being on the other side of the Great Wall of the Code Review Process, being told that the process will be somewhat less onerous at some indeterminate point in the future."

#171 @davidhernandez - let's move this forward

#173 @cweagans - suggests adding that if a project is RTBC for a month, it should be auto promoted.

[edit - clean up for clarity]

kattekrab’s picture

Issue summary: View changes

Adding these words to the issue summary.

<strong>Existing Vetted Users</strong>
<ul><li>Users that already have vetted status will not be affected at this stage, other than now having access to automated review of their code before making it available for release</li></ul>

Have moved the preamble to a "background" section, so the proposal itself is at the top. I think it would be good if David and Jeremy could review, and resolve the question as to what (if anything) else is substantially different from the discussions, and roll that in too.

Putting back to needs review in order to finalise the policy.

kattekrab’s picture

Issue summary: View changes
kattekrab’s picture

Status: Reviewed & tested by the community » Needs review

and actually doing the status update as well.

gisle’s picture

In #174, you're misquoting #8. It is not about full projects. A full project cannot have translations on https://localize.drupal.org/ unless it is allowed to have tagged releases.

My comment in #8 tries to address the following bullet point for non-vetted users in the issue summary:

The promoted project will have development snapshots, but cannot have tagged releases. (beta, rc, 1.x, etc.)

This could be mitigated by allowing non-vetted developers to create tagged releases, but not stable releases (as in 1.0, 1.1, 2.0, etc.), only tagged unstable releases (beta1, rc1, etc.).

If combined with #2457643: Only allow releases with security coverage to be recommended, it would make it clearer that unstable releases were not under the security team umbrella.

joachim’s picture

Issue summary: View changes

Just fixed a typo in the summary.

kattekrab’s picture

@gisle - thanks for clarifying the details around translations and tagged releases in #8 - that seems important to me, and should be addressed in the finalised policy. Edited #174 to say "with tagged releases" to make that point clear.

Homotechsual’s picture

I've read through this issue quite thoroughly and I have a couple of observations:

  • Drupal (Core and Contrib) produces reliable, high quality code - this is due in large part to the review process. Yes, OK, it's annoying and it's burdensome, but it's educational, enlightening, and one of the major "plusses" we can use when selling Drupal to our clients/bosses etc.
  • There are problems with the process and the resources required to run it, but it would be a huge regression in terms of security, reputation and quality to dismantle the process entirely.
fuzzy76’s picture

So it's been 9 months... What are the next steps here? What is holding this back?

jthorson’s picture

The TWG has a final direction on this, and last year vetted it with both the drupal.org software working group and the security team.

We are in the process of pulling together a final announcement and update, and will be sharing this with the community shortly. We recognize that this issue has dragged on much longer than it otherwise should have, and as such, want to take an extra step of re-confirming alignment regarding the final announcement draft with the various parties involved in developing the process, prior to publishing.

We thank you in advance for your patience.

kattekrab’s picture

Any news @jthorson?

Yesterday at DrupalGov I was made aware of another instance of a good module languishing in our queue, repeatedly being pushed back to needs work for violating comment line length - which the author eventually made available on github.

It would be great to see this issue resolved soon.

Let me know if there's any assistance I can provide.

fuzzy76’s picture

Having unvetted, random people "off the street" doing reviews (because your volunteer-driven process can't scale) will have that effect.

jthorson’s picture

I believe it's fair to say that we have consensus as far as the desired outcome (though there are still a few people who were involved in the discussion that I haven't managed to touch base with) ... so I'm currently in the process of taking this desired end state, and developing a task list and proposed roadmap as to how we accomplish it.

Because the majority of these tasks are either dependent on, or directly related to, changes to Drupal.org, I anticipate submitting this roadmap to the DA as a proposed Community Initiative, in accordance with their new Community Initiative process ... and anticipate it sneaking into the DA team's second quarter workplan.

This does not mean that the DA staff will implement the roadmap themselves, as I'd suggest this will move much faster with the support of community volunteers working on implementing the required changes ... but this will assure assignment of a DA Staff liaison to help guide those volunteers and ensure alignment with DA design philosophies and security & deployment requirements.

Once that roadmap is complete, I will add it to the issue summary as a final TWG recommendation; after which this issue will be closed and discussion moved over to a new issue focused on the implementation efforts ... whether that will fall into the drupal.org project or projectapplications queue is yet to be determined.

jthorson’s picture

The Community initiative post can be found here #2666584: [Community Initiative Proposal] Project Applications Process Revamp. Ideally, what this needs is a community champion. There are plenty of people willing to talk or complain about the process, but we need people to sit down and do the heavy lifting.

The google doc Roadmap can be found at https://docs.google.com/document/d/1eaI0Jq5CIVClCHv4uAxbJVHky5twPkWOqJ-A... ... constructive comments welcomed.

mlncn’s picture

Any process that treats new contributors differently from existing contributors is pretty much guaranteed to end up treating new contributors badly— separate is not equal.

The present proposal at best pushes the pain of a segregated process to the first release or the second module.

Instead, let's support what cweagans, gisle, attiks, randallknutson, kreynen, webel, and others have said on this thread in opposition to two tiers and to ask that we strengthen the goal to "Reduce the feeling there are two tiers of users; those who can contribute code freely, and those who cannot" to "End separate standards for new and established users". This doesn't mean we eliminate standards, and for instance the proposal by chx in #95 isn't a separate set of standards, but a standard available for everyone to opt into.

An even simpler system that would be the same for everyone, that moves from project reviews as a highly imperfect proxy for person reviews to just project reviews, is that a project must have one endorser (who has a full project themselves), other than the author, to become a full project and/or to have a first release. This is easy for any established member of the community, and requires only that any new contributor connect with the community enough to find one person to publicly vouch for one's ability to maintain the module. If existing contributors, who know the ropes and have connections, find this too onerous then it certainly is too onerous for new contributors.

Have guidelines that emphasize security and a process for removing the privilege of approving others' projects for those who abuse it; make full project reviews an optional but prominent badge of honor, which follow the same approach and rules followed now for project application reviews; and we'll have a system that educates and involves every Drupal contributor. #2532062: [policy, no patch] Changes to the security team process / tools to reduce workload from increased project applications is a good idea in its own right but not a prerequisite; we'll learn if smoothing the project release process forces that issue.

I am willing to be community champion for this but only if we are aiming for a process that puts us all in the same incentive structure together, rather than condemning our newest contributors to anything that would be, however nicely furnished, a newbies-only lobby that too many of us, however well-intentioned, won't visit often enough.

mikl’s picture

Seems we're at an impasse. Those who want a complicated review process aren't willing to actually do the work required to get a better system. But they're also not willing to step aside and let us remove the current system, and the pro-bureaucracy people have enough political pull to make sure we won't get the system removed.

And since Drupal doesn't have leadership willing to step in and break ties like these, this will likely not be resolved any time soon. Eventually, we might get a slightly less newcomer-hostile system, once DA throws enough effort and money at the problem. What a tragic waste, when you consider that disabling the review process could be done with barely any effort at all.

greggles’s picture

@mikl, I invite you and anyone else to join the teams who would have to clean up after a completely open system and spend a few months in those roles and see if you have the same perspective. It's also unfair to describe people in favor of this policy with "pro-bureaucracy" or "hostile towards newcomers" and it doesn't move the issue forward.

mikl’s picture

@greggles, I don't buy your premise that there would be anything in particular needing clean up, except for spam, which I think the D.o. webmasters already have a working process for.

As for calling those in favour of complicated processes “pro-bureaucracy”, I don't think that's completely unreasonable, and I think it's quite clear that the current system is hostile to beginners.

To be clear, I have the greatest respect for your efforts on the security team, but the current approach simply doesn't scale, and given the choice between relaxing our security standards vs. scaring away new contributors, the contributors should win. Because an open source project that does not effectively recruit new people is one that's well on its way to the dustbin of history.

kattekrab’s picture

I'm also willing to champion this issue.

Whilst I'm not a developer, I chair the community working group, and sit on the board of the Drupal Association.

I personally believe this single issue has significantly stalled increased participation in our project as we've been hamstrung by our inability to scale.

I am deeply sensitive to ALL the perspectives raised here, and know we need to tread carefully, and respectfully.

But no real visible progress has been made in almost a year. And quite frankly, I find it heartbreaking.

So - @mlncn - I'm in - should we try to coordinate a time on IRC to plan steps forward?

geerlingguy’s picture

@kattekrab - maybe we could also set some sort of milestone/deadline for DrupalCon New Orleans, and make this a main part of whatever community summit we have there? Even if we figure out some solutions in the interim, I think we (as a community) will need a time to hash out the finer details and get some solid work done towards implementing whatever changes are needed (maybe bleeding over into the sprints).

kattekrab’s picture

@geerlingguy - that's a great suggestion!

The DA has actually just asked me if I'd be willing to facilitate the community summit again at New Orleans, to which I said yes. But having time and space there to discuss this (it nearly always comes up there anyway) and then using the sprints to move it forward or finalise is a great idea.

However, a lot of work has already been done here, let's not lose sight of that, or start from scratch.

I really want this fixed quickly though.

Today - there are 302 open issues in the queue
https://www.drupal.org/project/issues/projectapplications?categories=All

Around 100 of them are RTBC, 13 of those were created over a year ago. One of them was created 2 years and 7 months ago :(

And now that we automatically close issues that have no new activity on them after some time... no idea how many others are languishing forgotten.

kreynen’s picture

...or start from scratch.

It's difficult to have open/inclusive conversations about topics like this at a conference without telling people who haven't been following the issues that more input isn't helpful. You can't rehash these issues again and also expect to make any progress on solving them.

Unfortunately I'm not going to be at NOLA, but I wanted to draw attention to the changes made to #2580729: Start decoupling Organization nodes from Marketplace. If credit for high quality reviews influenced organizational ratings, you'd likely see more organizations willing to let their developers spend time reviewing projects on the clock. I know I'd do more reviews if it meant the University of Colorado would be listed before PennState and Waterloo. Similarly if orgs were ranked by the number of vetted git users, that would encourage more of them to require the developers they hire to go through the review process.

As critical as I am of the current state of PAR, we have made getting 'vetted git' access part of the process all new developers go through at the University of Colorado Boulder. We are currently waiting on #2646234: [D7] User External Invite. The initial reviews of the project were well informed and very helpful in improving the project, but like most PAR issues, the project just sits.

Luckily we don't need PAR to promote projects, or this would actually be impacting our options for using this code in https://www.drupal.org/project/express. Like most large organizations, we can promote projects on Drupal.org owned by our organization account even when the developer doing the work isn't vetted. If this process closes that loophole, we'll host our work exclusively on GitHub.

catch’s picture

One relatively small change discussed here would be showing automated testing results more prominently on project pages, have been thinking about that for a while, so opened #2688127: Make automated testing results more prominent on project pages.

jthorson’s picture

Status: Needs review » Closed (fixed)

I'm going to suggest that this particular issue (i.e. the project applications discussion at the Technical Working Group level, specifically) has run its course.

A next step has been defined, one which attempts to help free the immediate project applications queue logjam, but still acknowledges multiple concerns which have been raised regarding suggestions to scrap the process altogether. The best thing we can do as a community to help address the current situation is shift our discussion away from analysis and debate, and refocus further discussion on implementation. As such, based on agreement during the latest TWG meeting, I'm closing this (i.e. the TWG discussion) with a pointer towards #2666584: [Community Initiative Proposal] Project Applications Process Revamp.

It should be noted that the proposal discussed in that issue is not necessarily the 'end game' with respect to the Project Applications queue ... rather it is seen as the "next step". The approach described there will allow us to clear out the current queue, shift the roadblock away from the front door of the new contributor experience, and allow us to gather actual statistics and metrics either in support or countering many of the concerns which have been raised during the course of this discussion.

kattekrab’s picture

Thanks @jthorson for your efforts in shepherding this discussion.

Yay for progress!