There are two key elements to the Drupal contribution credit system as it stands right now.

  • What activities we recognize
  • What weights we apply to those activities

Contribution Credit 1.0

While we don't reveal the exact weights used, we can talk about which categories of weights are used.

The initial version of the contribution credit system had a system of weights based on the following:

  • Count of credited issues
    • A logarithmic scale of weight on issues, based on project usage - This has since been switched to a linear scale, with a lower floor.
  • Supporting partner status, scaled based on level of support
  • Membership status
  • Projects supported

Moving towards Contribution Credit 2.0

In this issue, we'd like to gather additional factors (and, especially, methods of weighting those factors) that could be included in the next generation of this credit algorithm.

Additional Factors

D.O Activities

  • Project releases made
  • Documentation edits
  • Translation strings
  • Having a maintainer role on a project
  • Forum posts (especially if we had a 'helpful' flag)

Non D.O Activities

Firstly - how would we measure these? Maybe a content type where folks submit a generic contribution of these types... with someone moderating?

  • Project leadership roles
  • Volunteer
  • Mentor
  • Working group member
  • Board member
  • Event organizer
  • Event sponsor
  • DrupalAnswers?
  • Etc

Weight and Multiplier ideas

Each measured contribution activity can be assigned a flat individual weight, in order to reward the activities with the greatest impact on the project in a proportionate way. In addition, multipliers (like the project usage one we already use) allow us to provide additional sensitivity in the system:

  • Some measure of effort or complexity? How?
  • Some measure of time commitment? (is this redundant with above?)
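As a rough illustration of the flat-weight-plus-multiplier structure described above, here is a sketch in Python. The activity names, weights, and multiplier curve are all invented for illustration; the real weighting configuration is not public.

```python
# Illustrative sketch only: the real drupal.org weights are not public,
# and these activity names and numbers are made up.

FLAT_WEIGHTS = {
    "issue_credit": 10,        # one flat weight per measured activity
    "case_study": 5,
    "supporting_partner": 20,
}

def usage_multiplier(usage, floor=0.1, cap=3.0):
    """Linear multiplier on project usage with a lower floor, mirroring
    the switch from a logarithmic to a linear scale mentioned above."""
    return min(cap, max(floor, usage / 100_000))

def activity_score(activity, count, usage=0):
    score = FLAT_WEIGHTS[activity] * count
    if activity == "issue_credit":   # only issue credits are usage-scaled here
        score *= usage_multiplier(usage)
    return score
```

With these made-up numbers, ten issue credits on a 100,000-install project would score 10 × 10 × 1.0 = 100, while the same ten credits on a tiny project would bottom out at the floor (10 × 10 × 0.1 = 10).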
File attached to comment #27: coder-installations-downloads.png (16.98 KB, by hchonov)

Comments

hestenet created an issue. See original summary.

hestenet’s picture

The comments on the previous contribution credit announcement likely have some suggestions we should gather here: https://www.drupal.org/drupalorg/blog/recognizing-more-types-of-contribu...

Grayle’s picture

These ideas are off the top of my head, mostly for "patch" issues, probably terrible, possibly impossible to implement but maybe it'll spark someone's inspiration.

"Likes"

Now, hear me out. I don't want to turn this place into Facebook. But, each comment could have an upvote button.
It doesn't show how many people upvoted it, however. No notifications or anything, just used for weight modification.

Who pressed the button can further modify it. Which, yes, is problematic, but not doing it is also problematic. I'm just spitballing here.

- Someone from the same organization: less weight
- Someone new: less weight (?)
- Someone with a lot of credits: more weight (might be a bit circular?)
- Maintainer: more weight, unless the comment is another Maintainer?
- Someone who just likes stuff and that's it: far less weight
- Someone who's mentor to a few people outside their organization: more weight

Maybe various types of "upvote"? "Helpful", "I bet that took a while", etc.
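One possible way to encode the voter-based modifiers listed above; every factor value here is a placeholder, and the `User` fields are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class User:
    org: str
    credit_count: int
    votes_cast: int = 0
    is_maintainer: bool = False

def upvote_weight(voter: User, author: User) -> float:
    """Scale an upvote by who pressed the button (all numbers invented)."""
    w = 1.0
    if voter.org and voter.org == author.org:
        w *= 0.25          # someone from the same organization: less weight
    if voter.credit_count < 5:
        w *= 0.5           # brand-new account: less weight
    elif voter.credit_count > 100:
        w *= 1.5           # established contributor: more weight
    if voter.is_maintainer:
        w *= 2.0           # maintainer vote counts more
    if voter.votes_cast > 10 * max(voter.credit_count, 1):
        w *= 0.1           # "just likes stuff and that's it": far less weight
    return w
```

The same-organization and brigading discounts interact multiplicatively, so a brand-new same-org account's upvote is worth very little.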

Ask the Maintainer

When closing an issue, email a "survey" for that issue to the maintainers where they can select some options.

How hard was this issue? [easy, moderate, hard, jesus]
Will solving this issue help the module in a big way? [eh, sure, yup, hoo boy]
etc?

And their answers modify the weight of the credit. Again, their answers are private, for obvious reasons.

DamienMcKenna’s picture

How do community projects fit into this, given they will never have any usage stats?

As a general thing I don't think we want to add more red tape around the process via emails, surveys, new content types.

hestenet’s picture

We can potentially assign weights separately by type of project (module, theme, community, ...) as well, which is probably the best solution that doesn't require manual paperwork.

hestenet’s picture

When closing an issue, email a "survey" for that issue to the maintainers where they can select some options.

How hard was this issue? [easy, moderate, hard, jesus]
Will solving this issue help the module in a big way? [eh, sure, yup, hoo boy]
etc?

And their answers modify the weight of the credit. Again, their answers are private, for obvious reasons.

As a general thing I don't think we want to add more red tape around the process via emails, surveys, new content types.

I, too, would like to make sure we keep paperwork to a minimum. That said, a maintainer already has to check a box for each commenter on an issue to assign credit when closing. Maybe one more radio button for effort and/or impact isn't too much?

That said, a few folks have pinged me with some alternative ideas for measuring effort/complexity/impact - so hopefully they'll chime in soon.

mglaman’s picture

Curious about weights: stars were added to projects. Are stars taken into account, or is it just based on project usage stats? I know the weighting is secret.

drumm’s picture

The general algorithm and factors used are not a secret; they are in code at https://git.drupalcode.org/project/drupalorg/blob/79f1d09acbd9258eb0cd8e... The weighting configuration is not public.

Project stars are not considered. They are relatively new, and might not be a strong signal. I haven’t been paying attention to their use. 60,775 stars have been given to 10,447 projects, which is more than I was expecting. It could be something worth considering. It is also a lightweight action and we would have to think more closely about mitigating brigading.

hestenet credited jerdavis.

hestenet’s picture

Capturing additional feedback from @jerdavis

https://www.drupal.org/project/drupalorg/issues/3086867#comment-13294910

A thought I'd add to future algorithm considerations would be this:

1) Increase the value of contribution credit based on the number of contributions to the same project. A single commit to 100 projects should be worth significantly less than 100 commits to one project. That encourages people to actually maintain the projects they put on D.O. and not try to game the system through a shotgun approach.

2) Weight the number of contributions to single projects higher if that project is also a supported project by the organization. That encourages organizations to both contribute code and maintain that code. Even if the code is used by fewer people than Drupal Core, it doesn't need to be a top 10 or top 100 module for the effort to be valued.
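A minimal sketch of this depth-over-breadth idea (the ramp, cap, and multiplier below are arbitrary placeholders, not a real drupal.org formula): each additional commit to the same project is worth slightly more, so 100 commits to one project outweigh one commit to each of 100 projects.

```python
def credit_value(commits_to_project, supported_by_org=False):
    """Per-commit value that grows with depth on a single project
    (numbers are placeholders for illustration)."""
    depth_bonus = min(commits_to_project, 20) * 0.05   # capped depth bonus
    value = 1.0 + depth_bonus
    if supported_by_org:
        value *= 1.5   # the org also lists this project as supported
    return value

def total_credit(commit_counts):
    """commit_counts maps project name -> number of commits."""
    return sum(credit_value(n) * n for n in commit_counts.values())
```

With these numbers, 100 commits to one project score 200, while a shotgun approach of one commit to each of 100 projects scores only 105.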

hestenet’s picture

Another idea discussed in Drupal slack was:

  • Including some kind of multiplier on the projects supported factor
  • Considering whether the projects have security coverage or not

valic’s picture

I am reposting my suggestion from a different topic:

Credits should only be given when there are more than x comments on the thread, and definitely more than one person involved.

I would definitely add the ecosystem, and whether the project has test coverage, to the calculation (so module authors are more focused on that as well). Activity measured as the number of comments could be deceptive; people would just start issues for everything.

My draft would be :-D

a. The complexity of the project (1 - 100)
b. The number of installations.
c. The number of downloads.
d. Ecosystem (we don't use that for anything) - giving an extra point for having an ecosystem of 10+ modules, or similar.
e. Special point (activity, test coverage, coding standards) - something along those lines gets a special point.

So we would be able to do the following:
a * (b/c) + d + e

Some examples (ignore my guesses at project complexity):

Commerce
70 * (60431 / 1124347) + 1 + 1 = 5.7

Google analytics
20 * (353500 / 6016816) + 1 = 2.17

Single date time picker (my module)
20 * (1322 / 26529) = 0.99

https://www.drupal.org/project/webmasters/issues/3083990#comment-13276438
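valic's draft formula, a * (b / c) + d + e, translates directly into code; the example numbers below are the ones given in the comment, and the complexity values are valic's own guesses.

```python
def draft_score(complexity, installs, downloads, ecosystem=0, special=0):
    """a * (b / c) + d + e from the draft above."""
    return complexity * (installs / downloads) + ecosystem + special

# The three examples from the comment:
commerce = draft_score(70, 60431, 1124347, ecosystem=1, special=1)  # ~5.76
ga       = draft_score(20, 353500, 6016816, ecosystem=1)            # ~2.17
sdtp     = draft_score(20, 1322, 26529)                             # ~0.99
```

Note that b/c (installs over downloads) is a retention-like ratio, so a module many people download but few keep installed scores lower.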

valic’s picture

It would be great if maintainers had an option on a project to be excluded from the contribution credit system, and if that were visible on the project page and when creating a new issue.

baddysonja’s picture

Here are some thoughts re. non-code contribution, especially event organisation. I have organised many events of very different sizes:

  • Local meetup in the region - normally 2-4 hours in the evening; organising it and getting people to come takes maybe the same amount of time
  • 2-day Drupalcamp in the region. This was much more effort, and I would say the organisation was around 20 days of work in total per person (though this depends on the size of the organising team)
  • Drupal Europe - a 1,000-person Drupal conference for the European community. I was on the core team and spent around 6 full months organising it

With the current system, I received 1 credit for each contribution. The contribution also doesn't get any weight, as this is a community project.

Organising events is important as they help grow the local Drupal community. Still, the effort behind organising events varies widely and should be treated differently. We also have different roles within the organisational committee: there is the main organiser team that is responsible (like maintainers of code projects), and then there are the volunteers who spend time on site or help out with certain tasks.
The larger the event is, the more responsibility rests on the "maintainers", but for the volunteers there is not much difference whether they help out at a small or a large event; both are very valuable.

In order to give more weight to maintainers / organisers of events, I suggest that we:

  • Categorise events by size: X-Large (1000+), Large (500-1000), Medium (100-500), Small (20-100) and X-Small (1-20)
  • Maintainers of the community project (the event) get more weight the larger the event is.
  • Volunteers get the same weight on a credit, regardless of the size of the event.

This would recognise the effort of the main organisers (maintainers) of events. This could potentially work for other community projects as well, we just need to figure out how to categorize the projects.

DamienMcKenna’s picture

+1 to baddysonja's suggestion re community projects.

rszrama’s picture

The way we use the contribution credit system attaches, in some ways, financial value to things community members like us already did. With the launch of the credit system, you could essentially pay in sweat equity to be featured in parts of the site - investing your own time into making Drupal better and being rewarded with recognition in the directory of service providers.

When I first read Dries’s suggestion that credits be weighted by the number of users for any particular project, I immediately considered the impact on Centarro with respect to Drupal Commerce. While we contribute to core initiatives and a variety of other major contributed modules (Entity, Profile, Address), most of our contributions are to the Commerce project. Because our project is used by far fewer sites than core itself (only 5.8% of D7 / D8 sites use Commerce), our contributions are now devalued relative to contributions to core, even if the work involved in any one commit is more substantial in effort involved and number of users impacted (e.g. spending months refactoring code to support a native address book).

The recent weighting update correlates impact to the number of sites running a project whether or not a particular contribution actually impacts those end users. For example, someone might get a variety of core contribution credits by reviewing or commenting on minor issues that don’t materially impact many Drupal users (Aggregator help text refactoring? Color module preview JavaScript tweak?), but those credits will have an outsized impact on their ranking in the marketplace vs. commits improving features that every one of the 50,000+ Drupal Commerce users may benefit from.

I don’t have a specific alternative proposal at present, but this change negatively impacted our company's ranking (though let me be clear, I don't believe it will ultimately impact our business that much). Because it's near impossible to weight the impact of individual commits, I actually think my alternative proposal would be to not weight them. Having already made the decision to assign financial value to contributions, we should probably provide more public advance consideration of changes that impact contributors. As is, it effectively penalizes work that far fewer people are interested in or capable of doing but that is essential, in our case, to the tens of thousands of Drupal users who are directly dependent on our contributions.

(Sidebar: I do think focusing on finding more ways to measure the raw data, e.g. different types of contribution, is important; I'm not sure it's as important to fine tune a specific algorithm. This keeps d.o focused on the data, not its interpretation.)

kristiaanvandeneynde’s picture

I'd like the system to take effort into account somehow.

While all contributions are valuable, I don't think it's fair that someone who runs a script on all modules to change t() to $this->t() and then generates 100 issues for that gets 100 times more recognition than someone who spends almost a year rewriting a crucial part of core, albeit in a single issue.

michaellander’s picture

With regard to how contributions are weighted for code-based issues, I'm definitely concerned about the usage based weighting. I can only speak on behalf of myself, but having spent thousands of hours on a module that does not yet have significant usage, it's a bit disheartening to receive less credit because the idea is more aspirational and not yet widely adopted. It also makes it more difficult to provide a business case as to why my time should be spent on aspirational projects when the ROI is less. Widely used projects already have more eyes on them, naturally leading to more issues/patches, should we be encouraging even more attention to them? It's challenging because we want to balance improving what we have but also cultivating new ideas.

I realize we are battling those who are trying to game the system, but ultimately we want to encourage efforts that push Drupal forward, regardless of intention. I think quantifying the value of what's within a patch is extremely difficult (if not impossible), but I'll throw some ideas out there anyway:

A few ideas to promote

  • Patches that provide test coverage
  • Patches that provide better commenting (multi-line descriptions and examples)
  • Patches with larger added and removed blocks of code (removed 10 lines, replaced with 6, vs. removed 1, replaced with 1)
  • Patches for projects with larger code bases
  • Patches to projects over a usage threshold(s), instead of something more dynamic

I'll also add, I really like the idea of factoring in commitment to single projects and potentially project stability/security coverage. Promoting those who are investing time to understanding modules and systems deeper seems like something worth pushing.

baddysonja’s picture

A small addition to #16. We just finished organising the Splash Awards in Amsterdam: https://splashawards.org/
We were 5 core organisers (maintainers) and we have been working on this event since the end of March.

200 people attended, 75 case studies were submitted, an international jury was organised, awards were produced, and new brand guidelines were created for future organisation of the event.

March, April, May: I would say we spent on average 2 hours per week: 12 weeks x 2 hours x 5 members = 120 hours.
June-July: We did a bi-weekly call and probably spent around 4 hours per week on average: 8 weeks x 4 hours x 5 members = 160 hours
August, Sept, Oct: We did a weekly call and on average spent 4 hours per week: 12 weeks x 4 hours x 5 members = 240 hours.
The last two weeks, each one of us probably spent at least extra 10 hours per week: 2 weeks x 10 hours x 5 members = 100 hours
-> TOTAL: 620 hours or 78 days (16 days per member)
Based on this calculation, it took each maintainer around 16 days in a time period of 7-8 months to get this done. Total 78 days.
Getting 1 credit for this work is not a very good measure of the work the individuals put into organising the event.

Using my example:
This event would be considered a Medium event and could give the organisers 50 credits in total, divided among the 5 maintainers (each would get 10 credits). Just a thought.
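Combining the size categories from #16 with the division idea here gives a simple sketch. The total-credit figure per category is invented, except the 50 credits for a Medium event used in the example above.

```python
# Size categories from #16; total-credit figures are invented, apart from
# the 50 credits for a Medium event used in the Splash Awards example.
EVENT_SIZES = [            # (min attendees, category, total organiser credits)
    (1000, "X-Large", 100),
    (500,  "Large",    75),
    (100,  "Medium",   50),
    (20,   "Small",    25),
    (1,    "X-Small",  10),
]

def organiser_credits(attendees, organisers):
    """Split the event's total credit evenly among the core organisers;
    volunteers would instead get a fixed credit regardless of event size."""
    for min_attendees, _category, total in EVENT_SIZES:
        if attendees >= min_attendees:
            return total / organisers
    return 0
```

The Splash Awards example (200 attendees, 5 core organisers) lands in Medium: 50 / 5 = 10 credits each.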

mirom’s picture

I like the points raised in #18. I think we should also keep in mind how to motivate people from smaller communities to contribute. Imagine that if I want to sell Commerce in Slovakia, I need to develop integrations for local payment gateways. Those gateways will never be used by thousands of sites, but they are necessary for Commerce to even be considered as an option by potential customers. The same goes for events: we will never have 500 people at our events, but having a Drupal Day or DrupalCamp is necessary for Drupal to survive in the country. TBH I don't have a solution for this, because I recognize that Core, Commerce or the International Splash Awards require more effort. Could we maybe introduce some concept of regionality?
I also think that we need to change the way companies are ordered in the marketplace, because it quite sucks if some international company ranks above all regional companies just because they "serve" all countries in the world.

rachel_norfolk’s picture

As Baddy mentioned in #16, recognising the contribution to the project of events is vital.

I would like us to consider not simply the size of an event as a measure but the impact the event has upon the project. For example, a small event that intentionally works to introduce new people to the Drupal Project, through "intro classes" etc, has more positive impact on the project than a thousand attendee event that makes no such effort.

We will need to come up with agreed "event features" that have positive impact, and then recognise their implementations accordingly.

rachel_norfolk’s picture

To be honest, the same principles apply to code contribution - it's not effort that we should be looking to reward, it's impact.

(and, possibly controversially, I might even consider translating t() to $this->t() as having positive impact)

michaellander’s picture

I agree @rachel_norfolk, but I'd also say not all impact is immediate or obvious. How do we still encourage inventors and creators? Maybe the solution for that isn't in the credit system. I'm not really sure.

hchonov’s picture

There is a module where the number of installations isn't anywhere near the number of usages, and I think we can all clearly agree on that regarding this module - https://www.drupal.org/project/coder
The ratio of downloads to installs is also much higher than for other modules.

mikelutz’s picture

That's because coder isn't a D8 module, it's a vendor library.
In D7 it's a module, managed through update status, and the 1389 reported usages are from D7/D6.

In D8, it's a vendor library hosted in the D8 packagist as drupal/coder, but it is not a Drupal module and is not reported through update status. Instead, it's a dev dependency of Drupal itself, installed along with all the other dev packages by `composer install`.

Which isn't to say that contributions to coder shouldn't be weighted at the same level as core; they should be. I'm only saying that coder is a special case, and that generally the usage numbers from update status are reasonably correct from module to module as a gauge of relative usage. Whether special handling for this one odd duck is warranted is certainly up for debate.

gngn’s picture

I very much like the idea of a 'helpful' flag.
It would make it easier to dive into a long issue discussion (and could be used for contribution credit).

stefan.korn’s picture

Now that the changes were made in #3086867: Modify algorithm for weighting project credits by usage, "projects supported", "case studies" and "membership" are gaining much more relative weight. While it may be fair to try to avoid gaming the system with issue credits, I am not sure this is the right way, since it (maybe accidentally) promotes the other weights. "Projects supported" and "case studies" are "everlasting" weights, and "membership" is a "bought" weight. None of them really indicates active participation the way issue credits do (especially since issue credits are time-limited).

In a way this change is (probably accidentally) helping the big ones stay in a better position without needing to actively participate.

Since issue credits are already a non-permanent weight, I would give them at least a higher relative weight compared to the others. I think it would be fair to measure "projects supported" by project usage too, which I don't think is done at the moment.

And regarding the proposed future non-code (event) contribution weighting, I think this can be an open door to gaming. Imho event contribution rewards should be handled somewhere else; maybe the Drupal Association should honor this directly in some way. Or maybe event contributions should be handled like case studies, and to avoid gaming, events "approved" by the Drupal Association could get a higher weight. I would not do this within the regular issue credit system.

Regarding improving issue credit weighting, I would propose something like story points on issues, maybe even automatically calculated, for example from issue followers and issue participants (a higher number of issue followers indicates high impact; a higher number of participants indicates more complexity). And maybe issue credits could be shared between the credited people, avoiding situations where each comment on an issue results in a full issue credit.
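The shared-credit idea in the last sentence could look like this; purely a sketch, with the follower/participant "story point" signals folded in as a simple multiplier whose constants are invented.

```python
def issue_credit_shares(credited_users, followers=0, participants=0):
    """Split one issue's credit among everyone credited, scaled by rough
    'story point' signals (followers ~ impact, participants ~ complexity).
    The scaling constants are invented for illustration."""
    story_points = 1 + followers / 20 + participants / 10
    share = story_points / len(credited_users)
    return {user: share for user in credited_users}
```

An issue with 20 followers and 10 participants credited to 3 people would give each of them (1 + 1 + 1) / 3 = 1 credit, instead of 3 full credits.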

C-Logemann’s picture

With the new "algorithm for weighting project credits by usage", issue credits on non-code projects now have a lower weight than on any small module, because these projects cannot have any usage. Even if we get more issue credits for organising bigger events, as suggested by @baddysonja, community-organising issue credits will carry only a low weight. So we maybe need a kind of non-code project value that would be counted similarly to project usage.

xjm’s picture

  1. Really glad to see "project releases made" on the list. For core, there's an enormous and mostly invisible investment into creating new releases.

    I think release nodes themselves should have a crediting widget, because while the person who actually pushes the tag and creates the release node has to spend time on that, there's far more people involved in the process behind the scenes. (In the interim, as a workaround, maybe we should start creating issues in the core queue that credit all the people who help with the release notes, process, etc.)

    My one concern would be that we don't want to incentivize spam releases in contrib. Doing the work to put together a new stable release of a module is great, important work. And an occasional hotfix release is necessary and extremely important if e.g. there's a critical regression that breaks sites. But pushing a bunch of useless, mostly empty releases would be gaming with a strong negative impact on sites using those projects.

    The usage weighting should help with that some, but we'd also want to think about ensuring the releases are different from the previous ones and flag projects somehow that repeatedly create new releases close together with only a handful of commits. Maybe weighting by the number of changes since the last release. For the most part (aside from urgent hotfixes), the more changes are in a release, the more difficult it is to actually ship it.
     

  2. Something I'd like to see also is weighting security work more heavily. Currently, contributions to a security advisory are credited as normal issues for the project. (Which is a great improvement over a couple years ago.) However, contributing to a security advisory for core or contrib is a lot more work than a typical issue, because of all the additional requirements for the work. There's also a low risk of gaming since SAs are well governed by the Security Team. I'd recommend SAs be weighted more than the average commit.
     

  3. We need to do something better about crediting "zero install but useful" projects. I love that there's now an event organizing working group, and I wonder if there's some way to scale that into governance for event credit. I really like Baddy's suggestions in #16 as a way of scaling event recognition; we just need a scalable way of vetting that an event is real.

    There's other "zero usage but useful" projects out there as well, like DDI and... err, well, drush (because it's a CLI tool and also not on d.o; maybe GitLab can help bring them back?). Edit: Also coder, as mentioned above.
     

  4. In terms of crediting mentoring, we already try to promote a practice where mentors at events comment on the issues that they helped mentor, in order to receive credit. However, there's a lot of additional, uncredited planning work that goes on behind the scenes: triaging issues, managing communications, recruiting and training mentors, etc. We post some issues related to this in the issue queue, but again have the zero install thing.

    I think mentoring, volunteering, sponsoring, and organizing for an event could all be credited to event "projects". The scaling for event size makes sense because (e.g.) mentoring a DrupalCon is more challenging and exhausting than mentoring at a smaller event like a local camp, and also higher impact, because you're reaching more people. (And while smaller developer summits actually tend to have the highest impact on development overall, my experience is that when I mentor at such events I usually have the opportunity to actually post my own feedback on the issues and receive credit that way based on the impact of a contribution to that issue. So it's credited in two ways then if there's also an event project, and IMO that additive effect is actually correct here.)
     

  5. For events, what about speaking? That doesn't seem to be mentioned on this issue yet. Preparing a good session takes a lot of time and also can have a really high impact. I've spent ~100 hours or more on each keynote I've given, and that simple 45-60 min on stage reaches a lot of people in a direct way. (Much more rewardingly than the somewhat thankless task of pushing out a release, but it's still not credited currently.)
     

  6. Finally, something I think is missing from the IS: We need to incentivize contribution, including novice contribution, while disincentivizing gaming. Low-value, gamed contributions are a bit of a problem in core, but we manage them with:

    • A team of committers who evaluate each contribution
    • A clear policy about what receives credit
    • A process of assuming good intentions, but still not crediting unhelpful contributions, and documenting why an unhelpful contribution could receive credit next time (in order to make sure we don't push away new contributors along with gamers).

    However, in contrib, the problem is much bigger, maintainers aren't always able to take on the emotional labor of evaluating and responding to each contribution, and we have seen a lot of trivial, automated patches or unhelpful issue attachments.

    Edit: I meant to mention that Drupal 9 compatibility and other improvements made with automated tools are actually hugely important (re: the t() to $this->t() example) and should receive credit. The problem to solve is how an organization that does nothing but run automated tools can suddenly overwhelm other orgs in the marketplace in a way that a gut check says doesn't seem quite fair. The difficulty is that any metric for determining the complexity of a contribution (duration of issue, number of comments, issue priority, etc.) will also end up in the line of fire for gaming. Helping fix a critical is huge, but the last thing I want is a sudden incentive to artificially inflate issue priority or post spam comments on criticals.
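xjm's point 1 above, weighting a release by the number of changes since the previous release while exempting urgent hotfixes, might be sketched like this; the 25-commit normalization and the 0.1 floor are arbitrary placeholders.

```python
def release_weight(commits_since_last_release, is_urgent_hotfix=False):
    """Discount near-empty releases to discourage spam releases, but keep
    full weight for urgent hotfixes (thresholds invented for the sketch)."""
    if is_urgent_hotfix:
        return 1.0                     # critical-regression fixes keep full weight
    return min(1.0, max(0.1, commits_since_last_release / 25))
```

A release with 25 or more commits gets full weight, a one-commit non-urgent release sits at the 0.1 floor, and a flagged hotfix bypasses the discount entirely.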

xjm’s picture

Forgot something else... the issues for Drush and Coder (etc.) reminded me of the fact that all our usage stats currently rely on having the update module enabled, which is not a best practice and excludes any tool not used on a production website. Our weighting is only as good as our telemetry, which is why the telemetry initiative is also important for improving the credit system.

rachel_norfolk’s picture

It seems that good understanding on how Drupal is used is essential to the sustainability of the project.

What would we need to change about the update module so that it *is* good practice to have it enabled on production sites? After all, really, that's the only data worth reading...

Wim Leers’s picture

What would we need to change about the update module so that is *is* good practice to have it enabled on production sites?

I don't see a way. There's opposing needs here:

  • The Drupal project would like accurate site count data, i.e. it wants Drupal sites talking back.
  • An individual Drupal site would like to minimize risk, i.e. it wants to minimize dependencies, which includes talking to servers somewhere on the internet (= drupal.org). An individual site can track updates by using Composer.

Maybe somebody else does see a way though! 🤞😊

rachel_norfolk’s picture

If a project cannot effectively communicate with its users and properly understand their environments, then there is a risk that a project will not be around in the future. That’s a risk worth considering, I think.

Wim Leers’s picture

I completely agree with you :)

nod_’s picture

Sorry if I missed part of the discussion; I've been under a rock for the last 2 years, basically. But to me it seems there is confusion between credit and ranking. What I understand:

  • credit: you are a human being; we recognize the work you put in, and you should be celebrated for that
  • ranking: you are a legal entity providing services, and your place in the marketplace depends on you helping Drupal and the DA with its priorities

From the issue summary:

There are two key elements to the Drupal contribution credit system as it stands right now.

  • What activities we recognize
  • What weights we apply to those activities

I disagree with that. The credit system is only the first element (what we recognize). The second (the weights) should not be directly associated with it, and it should be called the "ranking algorithm".

There has been discussion about how to recognize event organizing, and that's great, but it's always in the context of "how does it affect the ranking". For me, regardless of the ranking, it's good to find a way to credit people for event organizing (and once the data is there and properly qualified, we can use it for something else).

But if we tie the two together, I'm afraid we won't progress on the giving-credit part until we know how it will be used for ranking. It all comes from the fact that we order companies from first to last in a list, and that is what drives the whole thing, but I fear we're starting to go somewhere else with the whole "credit" thing.

So yeah, it's just words, but if our "credit system" has the ranking built in, I'm not on board. Could we change the title of this issue?

DamienMcKenna’s picture

Per nod_'s suggestion, should we split this into two issues, one about expanding what can be given issue credit, and another about the algorithm used on the marketplace page?

hestenet’s picture

I support the issue split! I think @nod_'s comment is a very valid nuance that was in the back of my mind, but not expressed out loud quite so clearly in this issue before.

nod_’s picture

To add a bit more to this, getting away from the ranking thing: we should define what "areas" of Drupal contribution we recognize (and I'm sure that list already exists somewhere), such as:

  • security: the security team obviously, but also the community working group
  • development: code, reviews, designs, modules maintenance, etc.
  • awareness: events, marketing, sponsorship
  • accessibility: accessibility work (as in code things, screen readers and such), but also documentation, trainings, work on improving beginners experience

With that we can first rank the subjects based on what we want companies with money and time to focus on. And I would argue that we shouldn't weight things inside the topics too much (as in resolving a community issue vs. resolving a code security issue). What I know is that time spent is not a good indicator of how much something should contribute towards ranking; it'll be enshrining what already happens in open source at large, and at this point we might be better off throwing this away altogether.

So a couple of things to get back on the ranking topic:

  1. What does it mean to be 1st in the marketplace listing? Is it the "best" company, is it the one that contributes the most? What do people who use and browse those pages think, regardless of what it was intended to be? What's the traffic like on these pages actually?
  2. What can we reasonably expect a company to provide regarding ranking? What can they actually do to improve their ranking? Giving $$ is an easy one but not really fair; what % of resources they put into giving back to the community might be a better one, I don't know.

We've seen lately that companies giving time to their developers to contribute is not enough; there needs to be some amount of training first, otherwise it's just externalizing those developers' training and that's kinda bad, as in penalizing-the-ranking bad => so we should have paid training available through the DA for companies that want to send more than 1 person into the issue queue. While keeping all the mentoring efforts we already have to help individuals (and possibly moving the money from one to the other).

This is a company ranking issue, so if we figure out: what a company can do and how do we want them to be involved with Drupal, we'll have the meat of the algorithm. Using user contribution as a proxy to company "status" is a slippery slope.

nod_’s picture

This got my brain going...

One thing I experienced and see is around engagement. Sometimes a company is fully invested in the community, such as Acquia (and many others); other times a company is involved only because individuals are involved, but those individuals have little to no support from that company to do this work (like ovhcloud where I used to work). Contribution from people of both companies is equally valuable. But the company that is fully invested is more valuable to the community. Executive-level commitment should be taken into account, for example.

Comments like #16 are totally understandable, but if we continue this way it's not companies we rank but actual people. Also it's a sea of bikesheds to paint that way... (to some extent we've already been there with certifiedtorock)
To see this another way: organizing a 5000-person event and a 10-person event each get "1" credit. The 5k-person event counts towards the "development" topic and the 10-person event towards the "awareness" topic. One requires more time, skills, and involvement than the other, yes of course, and with the appropriate contextualization on the profile page, humans reading it will realize the effort it took to get this credit and have respect for the person. At least I think that at the individual level, respect is what people want from credits.

To circle back, organizing a 5k ppl event is probably not possible at a company that is not fully invested in the community. 1 credit each, weighted by company involvement, ranked by topic priority. At the people level we don't judge what's a "better" contribution and we get a ranking out of it by taking company-level information and Drupal/DA priorities.

Because our telemetry for modules is not great, we could assign modules to "topics" (we already have tags on modules; this would help raise the quality of tagging too) and use that to weight contributions, along with some of the suggestions around repeat credits and other things. We might not need the install data at all.

nod_’s picture

Title: Contribution Credit Algorithm: Weights & Measures » Marketplace ranking Algorithm: Weights & Measures

And since hestenet agrees. Changing title.

nod_’s picture

Came across this attribution framework https://casrai.org/credit/ used in academia, in an article in nature discussing the topic: Open source ecosystems need equitable credit across contributions. There is also the all contributor tool: https://allcontributors.org (with their categorization https://allcontributors.org/docs/en/emoji-key).

(…) to date, no standard and comprehensive contribution acknowledgement system exists in open source, not just for software development but for the broader ecosystems of conferences, organization and outreach efforts, and technical knowledge. (…)

(…) If CRediT can teach us anything, it is that standards should emerge from the community, undergo many iterations and rounds of feedback, and receive buy-in from major relevant institutions and involved parties. The CRediT taxonomy resulted from a long categorization effort and is a prime example of a working contributor taxonomy. (…) A successful taxonomy for open source should develop through a similar community peer review. Just as academic institutions and publishers are embracing the CRediT model, open source contributions need the same attention.

Are there any news from the working group on this topic?

Kristen Pol’s picture

For "Non D.O Activities", one reason I created:

https://www.drupal.org/project/contribution_events

was to give contribution events without D.O projects a place to add issues and provide credit to sponsors, organizers, mentors, contributors, etc.

Perhaps something similar could happen for other activities? Either grouped by type of thing like this one or a more general bucket?

The issue is being able to add the credits. You have to be a maintainer to do that.

UPDATE: I read the issue summary too fast, so glossed over:

"Maybe a content type where folks submit a generic contribution of these types"

Yes, it would need to be something simpler than what we have now, so anyone could potentially request credit for contributions done off of D.O.

drumm’s picture

We have some work to do on events. Events posted at https://www.drupal.org/community/events already have good structured data for volunteers, speakers, sponsors, and more. We should highlight those on user pages and organization pages, so event organizers aren’t spending extra time making issues.

Kristen Pol’s picture

@drumm Yes, that would be great. Was talking with @leslieg today and adding Mentors to the list would be good. But maybe also Contributors would need to be added? That's different than Volunteers IMO. The only downside is only the one person who created the event can edit it and add these. Group editing on this event information would be amazing.

rubyji’s picture

This isn't about the algorithm, so please redirect me if there's a better issue or project to post this to. I want to suggest that it would be helpful to also be able to filter agencies by those led by people of color and people who aren't men.

hestenet’s picture

Thanks @rubyji - I think that something like a 'black owned businesses' filter - but generic to more relevant demographics - would be really cool.

guptahemant’s picture

I did a quick glance over this issue, and overall it's a complex topic to solve.
One of the key ideas suggested at the start was to add a field, set by the maintainer when giving issue credit, that indicates how difficult the issue is: easy, medium, hard. On similar grounds, I would like to suggest using story points on each issue: basically a complexity number added by the maintainer at the time of giving credits. For a very simple issue, this number can be 1 or 2; for mid-difficulty issues, 3, 5, or 8; and for a complex issue, 13, 21, etc.

Now with this approach another problem can come up: if every project gets this feature, the system can easily be gamed. So another level of restriction could be added around which projects get to have this feature. It could depend upon usage above a particular threshold, which seems fair, since a highly used project should get an incentive for contribution.

This approach can also solve the recognition of efforts involved in contribution events, i.e. an event of 5k people can have a higher complexity number assigned compared to an event of 10 to 20 people.
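As a sketch, the story-point idea could look something like this. The usage threshold, the function name, and the fallback behavior are all illustrative assumptions, not an existing Drupal.org feature:

```python
# Illustrative sketch of story-point-weighted issue credit. The usage
# threshold and all names here are assumptions, not real d.o behavior.
FIBONACCI_POINTS = {1, 2, 3, 5, 8, 13, 21}
USAGE_THRESHOLD = 10_000  # assumed cut-off for enabling story points

def issue_credit(story_points: int, project_usage: int) -> int:
    """Credit for one issue, as assigned by the maintainer."""
    if story_points not in FIBONACCI_POINTS:
        raise ValueError(f"invalid story points: {story_points}")
    # Projects below the usage threshold fall back to a flat credit of 1,
    # which limits gaming the system through low-usage projects.
    if project_usage < USAGE_THRESHOLD:
        return 1
    return story_points

print(issue_credit(13, 50_000))  # 13: complex issue on a widely used project
print(issue_credit(13, 500))     # 1: same issue on a low-usage project
```

The gate means a complex issue only earns its full weight on projects the community demonstrably uses.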

mherchel’s picture

nod_’s picture

reposting my question from 2 years ago in #45: "Are there any news from the working group on this topic?"

Since I do not like the turn the discussion is taking, I'd like to suggest a different way of ranking. The problem with the current solution is that companies are ranked based on the results of the contributions of their employees. And we're "discovering" that the way those credits are earned is very important to us, and we don't want companies to encourage gaming the issue credit. This method is creating tension between maintainers and contributors, and maintainers end up policing contribution. This is not healthy. Especially since the company this is for is nowhere to be found in the process. An individual employee can burn out trying to get issue credit for their employer without the employer caring.

If we're ranking companies, we need company level metrics and I'd like to propose one way of scoring. What we want from companies is mainly: money for the DA, full time contributors. So let's rank them on this.

Base algo

  • Company is a DA sponsor: base score 100 000. We could adjust and give 10 000 points per sponsorship level maybe? Or just no additional points, because it's not fair to rank based on the money spent?
  • Contribution pledge: base score 50 000. A public post from the company saying they're sponsoring Drupal, and how. Shows executive-level buy-in. Useful for later.
  • Module sponsored: base score 100 × a weight based on usage. Encourages companies to pay for maintaining modules.
  • Contribution time: % score, see below.

Contribution time score

On each user profile, we add a new field so every individual can put the real amount of time attributed to contribution. And the scoring goes a bit like this:

  • From 100% to 80% (included): ×10
  • From 80% (excluded) to 60% (included): ×5
  • Below 60% (excluded): ×1
  • Below 40% (included), and user created less than 4 years ago: ×2

The users counted must be "active" (to be defined) and have at least one active "role" on their profile. To prevent companies from creating fake accounts with high contribution time %.

Few examples:

  1. If a company employs 2 people at 90% (one core dev and one event organizer), 5 people at 60%, and 10 at 20%, the score will be:
    2×90×10 + 5×60×5 + 10×20×1 = 3500
  2. A company employs 10 full-time contributors: 10×100×10 = 10 000
  3. A company employs 1 person at 70%, 5 at 20%, and 3 "new" people at 30%: 70×5 + 5×20×1 + 3×30×2 = 630
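As a sanity check, the examples above can be reproduced with a minimal sketch of the proposed contribution time score. The bracket boundaries come from the table above; the function and variable names are illustrative only:

```python
# Minimal sketch of the proposed contribution time score. Bracket
# boundaries are from the comment above; names are illustrative.
def time_coefficient(percent: float, is_new: bool = False) -> int:
    """Multiplier applied to one person's contribution time percentage."""
    if is_new and percent <= 40:   # "new" = account under 4 years old
        return 2
    if percent >= 80:              # 100% down to 80% (included)
        return 10
    if percent >= 60:              # 80% (excluded) to 60% (included)
        return 5
    return 1                       # below 60%

def contribution_time_score(staff):
    """staff: iterable of (percent, headcount, is_new) tuples."""
    return sum(n * pct * time_coefficient(pct, new) for pct, n, new in staff)

# The three examples above:
print(contribution_time_score([(90, 2, False), (60, 5, False), (20, 10, False)]))  # 3500
print(contribution_time_score([(100, 10, False)]))                                 # 10000
print(contribution_time_score([(70, 1, False), (20, 5, False), (30, 3, True)]))    # 630
```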

The "boost" for new contributors is to incentivize companies to get new folks contributing (and we cap it at 40% because it's not the community's job to do the training for new folks hired at a commercial company), and it runs over 4 years because if there is job hopping it can still benefit different companies, and by the end of the 4 years hopefully the person is attached to their profile and won't create a new one just for the bonus, since all their contributions so far would not be transferred. Maybe it's naive; I just wanted something to help get new people contributing.

We could also take into account the "roles" a user has but it's tricky because all the roles are not necessarily sponsored by the company. We can just count people that have at least one current role active.

Formula

And the score would be :
DA sponsorship + contribution pledge + module sponsored + contribution time score

The principle is that whatever you do in one layer, it's impossible to "catch up" with another layer. No matter how many full-time contributors you hire, it won't count as much as giving money to the DA.

Additional information needed

  • A new link field in org profile to link to the pledge.
  • A new "contribution time %" field in the user profile (from 0 to 100). The actual value is hidden from the public profile to avoid companies policing this. It's up to the person to say how much time they actually have towards contribution, regardless of what is said in the company pledge. (for example pledge says everyone gets 40% contrib time, in reality it's more like 10 people get 30%)

This way we simply make all the discussions about "good" and "bad" contributions disappear (as they should), and we base the ranking on the resources a company gives to the community, regardless of the outcome. This also solves the issue of code/non-code contribution. An employee that is full time on event organizing "weighs" the same as a full-time core code contributor for the ranking.

Expected outcome

With a system like this, there is real incentive to give money to the DA and employ full time contributors. If there is not more sponsorship or more full time contributors then the ranking is most likely useless to companies and it's only employees that care about this, and we can put a stop to the internal tensions over credit.

  • The pledge can help employees request contribution time from the company, and because the allocation % is controlled by the user (and not the company), it can always be changed to impact the company rating if the pledge is not followed.
  • This puts a stop at the unhealthy direction of making maintainers "contribution credit cops".
  • The type of contribution is not important, a full time event organizer counts the same as a full time core committer. Yay equality.
  • This will need people from the DA to be able to verify a few things as gaming that system is also possible, but with the additional money coming in from companies that want to be ranked higher that should be possible.
  • This makes contribution credit a more personal thing where you're recognized for the work you did in the community. And maybe everyone involved on an issue should get credit, with maintainers unchecking only the very unhelpful folks and letting the rest slide, because it doesn't have the same importance as before.

Anyway I don't like linking a company rank to individual contribution credit, making maintainers cops, and letting companies off the hook for their reckless ways of trying to get credit for the sake of their rank while not doing things that we actually need to keep the community healthy.

jurgenhaas’s picture

Thank you so much @nod_ for writing this up, it reflects almost exactly what I'm juggling with in my mind for months as well and discussed with some folks recently. I'd like to phrase it like this:

What an individual community member can contribute is time. The outcome of x hours from A is different from the outcome of x hours from B, but both are willing to give the same in terms of the fraction of their lifetime. We should not rank them by the value of the outcome to someone else. Isn't that also a DDI principle? Some people are more efficient than others, or the commercial value of their contribution might be higher because there is more demand for that piece and not the other. But since both contributed an hour, their "credit" should be the same.

We experience that in larger projects all the time: when the project got completed and launched, it was only possible because of each individual contributor. Yes, some are working on maybe simpler tasks and others on more complex ones. What matters is their spent time, not the value of the result to someone else.

Don't know if and how contribution hours could be measured, but that could help to calculate "credits" for individuals objectively and all bias regarding type of contribution would be gone.

However, what still worries me, when it comes to the company-level ranking on the marketplace, is that smaller teams won't have any chance for exposure that reflects their contribution to the project as such. Isn't that another bias that we should be careful about? I mean, nothing wrong with the bigger corps. In fact, it's great they are around and about; the project wouldn't be in such good shape without their employees AND their cash. But nor would the project be at this junction without all the smaller ones.

catch’s picture

The pledge can help employee request contribution time from the company, and because the allocation % is controlled by the user (and not the company) it can always be changed to impact the company rating if the pledge is not followed.

One likely issue with this:

Some companies have people work on essentially proprietary software that never makes its way onto Drupal.org for most of their time, then they get say 20% contribution time to work on 'whatever they want' - and that 20% time may or may not actually happen.

However there's also contributions where client (or product) work is developed as much as possible 'in the open' and this ends up on Drupal.org. This could be maintaining projects, but it could also be patches against contrib modules used on a project etc.

For the first type of contribution it's very easy to quantify the amount of sponsorship time. The second kind is just not very easy to quantify - it could vary widely with what kinds of client projects people are working on at any one time. However a vast amount of bugfixing comes from the second kind of contribution, and this is often not emphasised.

I also think if unscrupulous companies are pressuring employees to farm credit (in ways that aren't actually benefiting either the project or the developers), they could also pressure people to exaggerate percentages on a Drupal.org profile (or the boss registers and sets themselves at 100%). Apart from that, given that different countries have different standard working hours, would suggest a raw hours figure rather than a percentage of time. If you're doing 50% contribution time but you work 80 hours a week, then you're actually doing one full time job and 40 hours unpaid overtime. Using a raw number of hours instead of a percentage doesn't directly address this but it does at least mean time is calculated the same way.

Maybe some kind of hybrid would work - i.e. add this on top of the existing ranking system and change some weightings?

Kristen Pol’s picture

Wow, amazing discussion and suggestions! I really like where this is going and agree on the main points:

1. Reward sponsorship
2. Reward time spent regardless of type

I am concerned as well that small companies are penalized by this, though they are also penalized now.

Could it be weighted by company size or revenue?

I also think that hours vs percentage would be better for many of us that work overtime.

nod_’s picture

I considered raw hours, but I like using a percent of paid hours for a few reasons:

In the scenario of "If you're doing 50% contribution time but you work 80 hours a week, then you're actually doing one full time job and 40 hours unpaid overtime", it depends how many hours you're paid for. If you're paid only for 40 hours, then it's 0% contribution time from your employer. You're still getting credit and all that, but your employer is rewarded by the amount they're helping you contribute, which is 0%.

"I also think that hours vs percentage would be better for many of us that work overtime." Is the overtime paid? If it is not paid, why do you want the company to get credit for it? They're already paying you less than what you earn them (by definition), so why give even more?

Working with percent means rewarding sustainable involvement. In both situations, if you're doing 80 hours a week, or regular overtime, to work and contribute, how sustainable is it really? Are you going to have the patience to work with the community in a positive way with those hours? Probably not.

As for how to choose the percent if you can't say how long you work each week, you can think monthly. You spend a whole day (8 or 7 hours) over the course of a month doing contribution? That's 5% (with an average of 20 working days a month). You can get 3 days every quarter? => 5%.

Using raw numbers might be counterproductive, since people will just try to "do their time" regardless of their actual availability, and yet again people might silently compensate for the lack of company support. We need to put the responsibility for the ranking back on companies, not on individuals.

As for an employer forcing people to fill in a certain value, or a boss putting themselves at 100%, that's where the checking comes in. If that new ranking brings more money to the DA, it can be used to pay for someone to check things out from time to time. As far as I know the marketplace helps the DA get company engagement/money; it's up to the DA to regulate this, not maintainers through contribution policing.

catch’s picture

In the scenario of "If you're doing 50% contribution time but you work 80 hours a week, then you're actually doing one full time job and 40 hours unpaid overtime" depend how many hours you're paid. if you're paid only for 40 hours then it's a 0% contribution time from your employer. You're still getting credit and all that but your employer is rewarded by the amount they're helping you contribute, which is 0%.

OK this is a good point, raw hours potentially makes it worse than a percentage. Objection withdrawn.

One other question then - I currently get sponsored 20 hours per week to exclusively work on Drupal core by one company. I don't do any other work for them, so I think by the way you're calculating this, I would put 100% in the profile (i.e. 100% of that job is paid contribution time even though it's not considered full time hours).

This means a company sponsoring two people per week at 20 hours/week each would then get 2 * 100%, but a company sponsoring one person 40 hours per week would get 1 * 100%, - however, I don't think this is necessarily bad, especially given all the other trade-offs.

Kristen Pol’s picture

Yeah, I'm confused by the percentage... lately I'm about 15 out of 20 hours sponsored... so 75%? In the past it was about 15 out of 60, so 25%? For me, putting hours would be way simpler, as my work schedule varies considerably.

I think the percentage idea comes from the notion that salaried workers are going to give "free labor" if working more than 40* hours. Many of us aren't salaried and are paid by the hour, so using hours is more appropriate than percentage. I understand why a salaried person might be seen as working for "free", but I know of places where even salaried workers are paid overtime, so they are compensated for the extra work.

(*40 hours in the US but this is different elsewhere... e.g. 38 in Australia)

nod_’s picture

Freelance are interesting :)

I currently get sponsored 20 hours per week to exclusively work on Drupal core by one company.

If you have 20 billable hours in your work week then yes, it's 100%. If you have 40 billable hours a week, then it's 50%. The percent is relative to the person's "work capacity" (aka paid hours).

Yeah, I'm confused by the percentage... lately I'm about 15 out of 20 hours sponsored... so 75%? In the past it was about 15 out of 60 so 25? For me, putting hours would be way simpler as my work schedule varies considerably.

In that situation, regardless of your actual schedule, you have a target number of billable hours you want to achieve per week, right? Let's say you need 40 hours to make a decent living. Out of those 40 hours, a company sponsors 20 hours, so your work capacity is used at 50% by contribution. Now if for a given week you're working 60 hours because there is a deadline, you need the money, or something else, it's still 50%, because your target number of billable hours is still 40. I'd hate for d.o to become a timesheet where you put down your hours every week, so I'd like to stay away from "real hours" as much as possible.

We want contribution to be sustainable and make sure people do not burn out. It's a matter of not going past your work capacity and keeping the work/life balance reasonable, or at least trying to give incentives towards that. For employees we can highlight this by saying that going past your work capacity is essentially giving "free labor"; that framing doesn't work for freelancers, but the core idea is the same for both.

The idea is that we want as many people as possible spending at least 60% of their time contributing: people that at the beginning of the week think about what they're going to do in the community before what they need to do for their clients. Whether they contribute 23.5% one week or 33% the next doesn't matter much; at that level it's still not enough for us to make sure as many patches as possible get reviewed, things are followed up on, etc.

Kristen Pol’s picture

To complicate matters, some of us are sponsored by multiple organizations and/or work at multiple organizations, e.g.

company 1 - sponsored 15 hours/week, non-contribution work 10 hours/week
company 2 - sponsored 5 hours/week, non-contribution work 20 hours/week
company 3 - sponsored 1 hour/month, no non-contribution work

And there are companies that we work for but on behalf of another (e.g. a client), so that's a whole other ball of wax. The "behalf of" list can be quite large.

This is fun, right? :)

nod_’s picture

Technically i guess it could be expressed but it'd be a mess of a UI 😛

From my point of view (assuming your target billable hours are 50/week), in the situation above company 1 gets 30%, and companies 2 and 3 get nothing. Would it be nice if companies competed to sponsor an individual? Probably wishful thinking... We still have commit credits, and those still show up in org profiles. They just don't count towards ranking.

We could talk all day about this, I'd like some feedback on the 4 levels, the fact that one level can't be used to "catch-up" to another level, that kind of things too. Easy to get lost in the details and miss the big picture 🙂

tedfordgif’s picture

The factors/levels you've proposed seem decent. Gaming the system by creating a bunch of "sponsored" modules of dubious value is possible, but would be pretty easy to police. You could also consider non-linear factors based on the module usage numbers and maintenance status.

My biggest concern is that any system of metrics or rankings is only as valuable as the trust it is built on. It's important to see the "raw" rankings based on your proposed metrics, but I would also like to understand the "corporate graph", i.e. the trust between companies. If I trust a company's contributions to Drupal, I'm more likely to trust the companies they trust. Of course, that can be gamed, too, and might favor the larger companies.

Maybe another way to say it is: even if you publish the algorithm, somebody will always want to look at the data in a slightly different way. Is it worth thinking about pushing the algorithm closer to the user, or giving them the knobs to control the algorithm?

This is probably a much larger discussion and nearly off-topic, so please don't let it derail the immediate goal.

Sam152’s picture

Short of dropping the credit system entirely, why not take this step, proposed by @nod_:

Is the company a DA sponsor? simple yes/no, doesn’t matter how much money you’re supporting the DA with because for some businesses $1000 is a lot and for some $25000 is nothing.

...and then randomise the order of the marketplace. If spam drops off as a result, I imagine the increased signal to noise ratio in the queues would benefit the project overall.

I would also suggest that any ranking based on telemetry which is trivially spoofed (downloads, installs) is also probably a pretty bad idea.

C-Logemann’s picture

I like the issue credit system and the new ways to show contributions on our individual and institutional profiles. And I see lots of good arguments for bringing more fairness to the marketplace. But I believe the increasing complexity in the process of getting more fairness results in less fairness. I think it's similar to the complexity of tax rules I know from Germany, though I think all modern democracies have the same problem. When somebody points out a tax rule that is not fair for a special situation, this often ends in additional rules trying to create more fairness. But in the end only rich people can pay the experts to profit from the rules, which makes them richer, while everybody has paid the government and the courts to make this happen. Something similar is going on here: good people contribute lots of time trying to bring more fairness to the marketplace ranking. So I like the idea of @Sam152 of just switching to random lists on the marketplace. In any case we should not return to alphabetical order.
For the sponsorship part of the ranking we can create extra sponsor lists etc. beside the random marketplace.

japerry’s picture

I do not believe users should indicate how much time is being contributed to drupal.org. For the companies that are gaming the system, they'll ask employees to put 100%, just like retailers ask customers in feedback surveys to put a 10/10 for service otherwise they get a failing grade. It's way too easy to abuse and it could be difficult to systemically detect.

The other issue here: even if users are being honest about percentages, they must self-select, and de-select, companies they work(ed) for. For example, Acquia currently shows 756 people on drupal.org, even though more than half of those users no longer work there. And the only way to unselect users is by having a drupalorg admin (not even a site moderator can do this) remove those users from companies.

Regarding modules, I do believe this could be more helpful in determining ranking, because there are more gates to gaining legitimate credit. For example, eligible modules could need to:

  • Opt into security coverage
  • Have a stable release
  • Have active maintenance status

and the credit could:

  • Be multiplied by usage
  • Degrade over time as the last commit or release ages
  • Include extra points for maintainers who are linked to the company

This would make it relatively hard to game, and if companies do game it, it's fairly easy to audit.
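To make the gating concrete, here is a rough sketch of how such a module score might be computed. All field names, the yearly half-life decay, and the 10%-per-maintainer bonus are illustrative assumptions, not anything proposed officially:

```python
from datetime import date

# Rough sketch of gated module scoring. Field names, the yearly half-life
# decay, and the +10%-per-maintainer bonus are illustrative assumptions.
def module_score(module: dict, linked_maintainers: int, today: date) -> float:
    # Eligibility gates: security coverage, stable release, active maintenance.
    if not (module["security_coverage"]
            and module["stable_release"]
            and module["actively_maintained"]):
        return 0.0
    score = module["base_credit"] * module["usage"]           # multiplied by usage
    years_stale = (today - module["last_release"]).days / 365
    score *= 0.5 ** years_stale                               # degrade as releases age
    score *= 1 + 0.1 * linked_maintainers                     # company-linked maintainers
    return score

m = {"security_coverage": True, "stable_release": True,
     "actively_maintained": True, "base_credit": 3, "usage": 1000,
     "last_release": date(2024, 1, 1)}
print(module_score(m, 5, date(2024, 1, 1)))  # 4500.0 (3 × 1000 × 1.5, no staleness decay)
```

A module failing any gate scores zero outright, which is what makes the scheme auditable: the gates are binary and public, and only the multipliers need tuning.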

One of the issues that annoys me personally as a maintainer is the posting of automated changes (phpcs, GPLv2+ deprecations, etc). If a user posts one of these issues in the course of otherwise normal work (i.e. they have a bunch of other issues that are more complex), fine. But I see a bunch of users who basically spam every issue they can find so their user page can look like they've made considerable contributions when in fact they haven't. I won't name names here, but search for 'GPL SPDX' to see what I mean. Is it a contribution? Technically, yes? But the problem is intent: these users aren't benefiting technically from the fixes, which means there has to be another motivation for even spending a few minutes on each issue, and for those who spam multiple projects with the same issue, I have doubts about their intentions to actually help the community.

This comes to the last part: accountability. We currently go after users that are abusing the system instead of companies. It's not too hard to see which companies are behind the systemic patterns of credit abuse on drupal.org. Companies seen abusing the system should be penalized or removed from the marketplace altogether... and the terms of use should be updated to clarify what qualifies as abuse. This would require some hard data/metrics so whatever team is doing the audit isn't relying on anecdotal evidence.

Greg Boggs’s picture

I like the idea behind focusing on what the company owners contribute. I am wary of the idea of rich people buying the top spots. I'm also wary of removing the credit ranking system: despite a few bad actors trying to game it, the system has brought in a very large amount of contribution in exchange for ranking.

catch’s picture

* Credit is multiplied by usage
* Degrade over time as the last commit or release ages

This is already true as far as I know. Although we don't currently have contribution credit for releases, that's #2875447: Contribution credits for project releases.

catch’s picture

The other issue here, even if users are honest about percentages, is that they must self-select, and de-select, the companies they work(ed) for. For example, Acquia currently shows 756 people on drupal.org, even though more than half of those users no longer work there. And the only way to unselect users is to have a drupal.org admin (not even a site moderator can do this) remove those users from companies.

This is a good point. With issue credit you record the company that's funding the work when you submit the comment, so while credit might be counted after you change jobs, it was true that the company was sponsoring you at the time you made the comment, so it balances out. That is not the case with a static profile status that has to be proactively maintained.

We currently go after users who abuse the system instead of companies. It's not too hard to see which companies are behind the systemic patterns of credit abuse on drupal.org. Companies seen abusing the system should be penalized or removed from the marketplace altogether, and the terms of use should be updated to clarify what qualifies as abuse. This would require some hard data/metrics so that whatever team is doing the audit isn't relying on anecdotal evidence.

Yeah, something like this is worth looking into. While I think we should strongly encourage 'trivial contributions', neither the project nor individual developers benefit from 'spam contributions'; the only organisation that benefits is the company. When I see comments that look like they've come from a script, I generally assume a manager or company owner somewhere has told someone to do something. So if we can inform that company that what they're doing is actually going to harm their marketplace ranking and they need to change it, then either they'll stop completely (less noise), or they'll find a way to ask employees to work on actually useful contributions. Either way would be good.

GaëlG’s picture

My small account regarding comment #55:
Yes, we are the second kind of contributor. As a small agency, we prefer to contribute on "little" things that will be directly useful to our daily work. Most of the time, we need some bug to be fixed or some feature to be implemented.
Instead of waiting for (or paying) someone else in the community to do the work for us, we do it ourselves, and it can sometimes take a lot of time, even for a bug fix. We make sure our work can be reused by others, so we provide patches, publish contrib modules, and so on. If we are not able to fix a bug ourselves, at least we report the existence of the bug or security issue.

We have dedicated slack time to work on tasks that benefit our whole business and all our future projects/clients. Part of that time leads to contribution, but not all of it. We never sit in front of our screens wondering "which d.o. issue will I pick today?".

So it's hard to measure, it can vary, and the current system is an incentive for us to publish our work even more (though we would probably do it anyway, as we did before the credit system). A somewhat "fixed" time ratio would not be an incentive.

ressa’s picture

For issues, I wonder whether fine-tuning the credit system is worth considering, by adding more granularity when giving credit.

Some mammoth issues take years to complete, and much coding and reviewing, whereas others (such as the README.md updates I have been involved with lately) are minor, and faster to complete.

Would it make sense to make the credit system more granular and add two tiers on top of the existing one? So, keep the standard credit, where you get 1 point, but add two more options, "Extra credit" (5 points) and "Major credit" (10 points):

  • Normal credit: 1 point
  • Extra credit: 5 points
  • Major credit: 10 points

For minor involvement requiring low effort and not much expertise, such as one-line fixes, README.md updates, fixing spelling mistakes, etc., you would get one point.

For more elaborate tasks, taking longer to fix and requiring more expertise, the maintainer should be able to grant either the 5-point "Extra credit" or the 10-point "Major credit" as well.

[Screenshot: mockup of the credit form with "Extra credit" options for 5 and 10 points]

(Originally posted in #3327807: Suspend automatic issue credit for credit-gamers, but really belongs here)
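Mechanically, the tiered scheme above is just a lookup table. A minimal sketch (the tier names and point values come from the comment above; the function name is hypothetical):

```python
# Point values per credit tier, as proposed above.
TIER_POINTS = {"normal": 1, "extra": 5, "major": 10}

def total_credit(tiers):
    """Sum the points for a list of tier names granted to one contributor."""
    return sum(TIER_POINTS[t] for t in tiers)
```

For example, two normal credits plus one major credit would come to `total_credit(["normal", "normal", "major"])`, i.e. 12 points.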

catch’s picture

For issues, I wonder whether fine-tuning the credit system is worth considering, by adding more granularity when giving credit.

The problem with this is that managing the credit system already adds quite significant overhead on many issues; for example, some issues have over 300 comments by over 50 people. It is a lot of work, but possible, to scan down the comments, find the ones that are credit-worthy, and assign credit. Trying to weight within an issue would be unmanageable. On top of that, a significant contribution to one issue might be the same effort as a minor contribution to another issue, so are we supposed to weight between issues too?

I do think it would be useful to weight issue credits by the priority of the issue (i.e. more for critical issues than for minor ones), since that wouldn't require additional work for maintainers but would help reflect effort a bit better than the current system does.
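Since every issue already carries a priority field, such a weighting could be fully automatic. A sketch with made-up multipliers (only the priority names match drupal.org's actual field values; the specific weights are my own guesses):

```python
# Hypothetical multipliers per issue priority; the numbers are illustrative.
PRIORITY_WEIGHTS = {"minor": 0.5, "normal": 1.0, "major": 2.0, "critical": 4.0}

def priority_weighted_credit(priority, base_credit=1.0):
    """Scale a standard issue credit by the issue's priority.

    Unknown priorities fall back to the 'normal' weight."""
    return base_credit * PRIORITY_WEIGHTS.get(priority.lower(), 1.0)
```

So a credit on a Critical issue would count four times a normal one, with no extra clicks for the maintainer.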

gisle’s picture

ressa,
for the reasons already pointed out by catch, many maintainers do not sit down and analyse the individual contributions made in an issue queue in order to manually assign credits; they rely on the automatic credits pre-assigned by the system (currently, any posted patch and any opened merge request receives exactly 1 (one) issue credit by default).

Introducing granular individual credit would require project maintainers to manually analyse the contributions. I don't think this is practical, and in most cases it will not happen. For this reason, I don't think introducing such a scheme will improve the marketplace ranking algorithm.

catch suggests that replacing individual weights with weights based upon issue priority could be automated, which is correct. But IMHO, the priority assigned to issues is not a good measure of the amount of work, skill or effort required to fix an issue. A typo in a class name will totally break a release, which makes fixing it "Critical" priority, but it is still a very low-effort task.

IMHO, making the issue credits more granular, by manual or automated means, is unfortunately not going to improve the marketplace ranking algorithm. I also agree a lot with comment #6 from DamienMcKenna, where he writes:

As a general thing I don't think we want to add more red tape around the process via emails, surveys, new content types.