Meta issue for comments from http://blog.boombatower.com/woes-testbot.
14 other followers, no comments...
I found the post interesting. I didn't see a call to action in the blog post other than this issue, so here's some feedback on your blog post. I am not sure whether the critique of the Modernizing initiative is accurate, because the article seems incomplete. Could you clarify these claims with some qualifying statements?
Will butcher a variety of systems from their intended purpose and attempt to have them all communicate
Adds a number of extra levels of communication and points of failure
Vast array of systems to learn and the unique ways in which they are hacked
I don't remember seeing any hacks of Linux, PHP, Drupal, Symfony, but I'm not sure. Maybe I missed something?
Status... Does not exist
This is not exactly true. A full testbot system is not in place on qa.drupal.org, but it is possible to test patches and run testbots.
On top of lack of resources the current initiative, whose stated goal is to "modernize" the testbot, is needlessly [ed. emphasis mine] recreating the entire system instead of just adding Docker to the existing system.
The strength of the blog post is its treatment of concurrent execution, while its weakness is in evaluating the modernizing testbot initiative due to a lack of information. I also did not see any information about some of the benefits of the initiative, such as multiple-environment testing, to compare against.
With great respect to the work you did, and continue to do, it is insulting to the reader that you colored the "wrong" system red and the "right" system green, so we don't have to make our own decisions.
@cilefen, yeah, I kinda agree. The whole post seems passive aggressive, "I was right all along", "you guys are poopy heads", and "I DID IT ALL FOR FREE!" Which isn't to say the post is /wrong/. Just that the tone is a little harsh for forward movement.
Agreed on the tone (see @Morbus Iff's comment above), but I wanted to chime in as someone with a fleeting interest in helping with the test infrastructure for Drupal.org projects. A few simple bits of feedback:
In terms of general philosophy, I'm hugely in support of a more generalized platform like Travis CI over a highly specialized, project-specific platform like the current Testbot, or what seems to be the current state of ReviewDriven. Letting users have some control over their environment, their tests, etc. encourages experimentation, allows flexibility (don't want to use SimpleTest? Want to use PHPUnit in a special config? Want to use Behat with certain extensions? etc.), and can be scaled to at least match the current state of affairs, if not exceed it.
All of this information is news to me as I wasn't very involved during the time period described so I truly have no idea of what went on. However, I agree with what others have said above. I think the tone is disrespectful to the people currently working on testbot in the same way the author claims to have been treated. Based on the tone it seems like the sole goal of the article is perpetuating a cycle of disrespect. This is unfortunate. I think we all want to see progress in this area. The tone of the article certainly won't encourage me to support the author's proposal.
As an aside, it is really too bad that the talk about improvements to Drupal.org that happened at DrupalCon Amsterdam was not recorded as that session directly contradicts many points raised in this article. The team working on the new version of testbot is doing some really good work.
Frankly, I found the entire thing pretty disrespectful, especially to jthorson et al that are constantly working on maintaining and improving our testbot infrastructure. As to the suggestion to migrate to ReviewDriven, I believe the general response five years ago was that an open source community shouldn't be migrating to a proprietary, for-profit system when we have something free and open that works for our needs. Maybe it's not as "technically pure" or whatever, but it's open and it works.
I imagine that if you had released your work as open source and said, "Hey, I built this thing and I think it would work really well as a replacement to the current testbots", the response would have been much different. Instead, people say "Hey, I'm the maintainer of the testbot infrastructure, and I built a company around what I'm doing for the community and I think that we should just switch to paying my company money to take care of this stuff". I'm sure you can understand how that would be frustrating and extremely concerning.
In any case, five years ago, the CWG wasn't a thing. Now it is. If you feel that you have been treated unfairly by somebody in the community or that you were somehow wronged by the community not getting on board with Review Driven, I'd advise you to take it up in private with the Community Working Group via https://www.drupal.org/governance/community-working-group/incident-report. In the meantime, continually rehashing this discussion on Drupal Planet is unprofessional and unproductive, and to that end, I'm closing this issue.
@mradcliffe: There was no call to action, other than: does anyone care that we are spending all this time re-inventing what we have, with no clear benefit and many disadvantages? I tried having a call to action years ago, I even open sourced it (as I had always intended), and spent 3 months trying to deploy it...only to be argued to death at every corner. The “unknown” sections are due to the new system not existing and thus there are unknowns. The other two systems are completed and therefore can be fully compared. The other half may be due to having written this 3 months ago, but alas the empty sections are at best equivalent and the conclusion thus stands.
The multi-environment support exists in the current system and in RD, but was never utilized in the current system because Drupal lacked the machine resources. If you enable one new environment you effectively double your test load, which does not work when the testbot already backs up. The DA has since started to pay for EC2 resources (as we suggested), but this is still an issue if you intend to test a number of environment variations. That is why concurrency was the focus of the RD system, since that is the root pain of all testing efforts.
@cilefen: I wrote two of the systems in the comparison and spent years of my life working to make Drupal testing better in every way. I am therefore unavoidably biased in presenting my opinion and the basis for that opinion in the article. Would it not be more wrong to hide that and claim I am providing an unbiased opinion? And yet here we are focusing on your interpretation of my intent rather than any useful technical comparison. This is not useful.
@Morbus Iff: I did not include the backstory because I am trying to let the past go and stick with making the best decision for Drupal. Had I included it I could possibly agree with your interpretation of the tone, but that was not my intent and I did a darn good job of keeping that out. That said...again, why are we focusing on this still instead of something useful? I've never heard an apology for the blatant lies (and other unjustified comments) I was told...and yet I did not focus on that. Should I not be offended?
@geerlingguy: RD system is definitely not married to Drupal testing.
@Ryan Weal: That definitely is not the tone nor my intention. Had it been that I would have included the 10+ pages of backstory. Can we focus on a technical comparison and making the best decision for Drupal as that is the only useful thing left to do?
@cweagans: Was the response to my efforts (multiple times) not disrespectful? I provide a technical comparison regarding improvement. I am not suggesting we migrate to RD, merely the codebase which I open sourced many many years ago.
The next paragraph is ironic since I was open to that from the beginning and told that to everyone I talked to. I merely wanted people to consider the possibilities and how to fund the work, since it sorely needed it. Lastly, I did open source the code. A fact those primarily involved know.
Continually rehashing? It has never been hashed. Not once has anyone sat down, looked at the problems facing the testbot, both in terms of people helping and in technical terms, and evaluated the proposed solution. That is all I ask. Given my years of effort that seems like a more than reasonable request.
Excuse my ignorance of the history because I've been around for just about three years and I am trying to make sense of this. It is important to me that this community makes good decisions most of the time.
Taking your argument at face value, which you say is biased—and that's ok, it's your blog—ReviewDriven is technically superior. Logically, there were non-technical reasons it was "ignored by the Association and never evaluated by the community". Can you or anyone shed some light on what those could have been?
@boombatower: Great. Please provide a link to the source, as that's likely a hard requirement. Also, it sounds like there may have been some offline communication on the topic, because the only other issue I can find in the testbot queue that mentions ReviewDriven is here: https://www.drupal.org/node/1477308 (and it sounds like, at that time, people were open to the idea). Perhaps you can provide more detail on the conversations around the topic that took place, point to any related documentation, etc.
Finally back from Bogota, I figured I'd wade in here, as this discussion is missing some very important context.
First of all, I'd like to agree with the blog post author regarding the current status of the PIFT and PIFR solution. I have been trying to build support for an upgrade of the existing infrastructure since DrupalCon Denver; with little success. The writing on the wall regarding the need to update our testing environment has been there for a very long time.
After the closed-source ReviewDriven proposal was submitted (and subsequently declined) in early 2012, we held a discussion with the author at the close of DrupalCon Denver. This discussion circled around the possibility of open-sourcing the code, and leveraging it as the next generation of drupal.org testbot; resulting in the creation of the Conduit and Worker projects on Drupal.org.
5 months later, at DrupalCon Munich, I got up in front of the community and announced the intent to use ReviewDriven as the next generation testbot for Drupal.org, as part of my first testbot session presentation. At the time, we had a functional ReviewDriven server and worker set up on OSUOSL hardware, alongside the existing testbots. However, the D8 core test suite, which was passing on PIFT/PIFR, was throwing 27 false negatives on ReviewDriven at the time.
I was more than willing to deploy the ReviewDriven architecture, and committed to doing so if the ReviewDriven author could get the D8 core test suite to pass 100% on the new system ... but this is the closest we ever got.
I found it very difficult to move any issue through the Conduit queues; and after the ReviewDriven author began his new job in Mountain View in the fall of 2012, that became impossible. Once again, I found myself the sole active testing maintainer, working simultaneously on both the existing and proposed future testing architectures ... and despite having evangelized ReviewDriven as the go-forward solution, I couldn't even get commit access to the repositories (see #1771024: Requesting commit privledges for Conduit/Worker modules). Because of this, combined with a sensitivity expressed by the ReviewDriven author that he perceived the community as "only wanting his code", I abandoned my efforts with ReviewDriven, and went back to focusing on updating PIFT/PIFR to catch up with the ever-changing needs of Drupal 8 development.
With the benefit of the above context, it should be no surprise when I express my disappointment at the author's decision to bring this argument forward today; without first communicating with the team driving forward the DrupalCI initiative or taking the time to understand the goals, history, and evolution of the current project. He makes a number of legitimate observations regarding the current testbot initiative; but the majority of the 'technical comparison' to DrupalCI is based on uninformed supposition at best.
I'd therefore like to take this opportunity to clarify a few of the observations made in the original blog post.
Status: "Does not exist"
Admittedly, progress on the DrupalCI project has not been very visible to the community, due to a lack of integration with Drupal.org. This reflects a conscious decision to adopt a 'local first' approach to DrupalCI development ... given that the bulk of testbot maintainer attention today is dedicated to troubleshooting 'it passes locally but not on testbot' queries, it is essential that end users are able to test using the same system/environment that the actual drupal.org testbots are using. In order to accomplish this, we have prioritized development of the test runner to facilitate local testing; with integration into the Drupal.org environment as a follow-up activity.
The original DrupalCI 'proof of concept' was developed and released to the community at Dev Days Szeged, in March of 2014; and has now been used by various community members (including those testing non-PIFR supported database systems) in a local testing capacity for almost a full year.
The migration from this 'proof of concept' to the 'final architecture' is currently in progress ... and the team has taken the extra step of committing to maintaining full functionality of the proof of concept throughout this migration process.
Initial deployment of drupal.org integration (with a limited feature set, not the full functionality) is currently targeted for April/May 2015.
The author describes DrupalCI as a 'mis-mash of languages' ... and for anyone looking at the codebase today, this is certainly the case. Given the team's commitment to maintaining functionality throughout the migration process, we are carrying around a lot of legacy code left over from the proof of concept; while at the same time introducing new and more modern approaches to portions of the production build. I can understand the potential concern one might have with the number of diverse technologies currently involved in the repository ... but counter with the assurance that only a portion of these shall survive to the final product.
The other consideration from a complexity perspective is related to the actual 'intent' of the system. PIFT/PIFR and ReviewDriven are intended as "Drupal-oriented testing platforms". DrupalCI, on the other hand, has been intended from the outset as something more ... a generic 'job automation' platform for end-user initiated drupal.org integration. Potential future plans for DrupalCI include considerations like automated patch conflict detection, automatic patch re-rolls, and automated interdiffs; all functionality which is not within the scope of either the existing testing system or the ReviewDriven platform. This extra mandate drives the need for additional complexity; and led to the decision to leverage existing best-in-class tools such as Jenkins; as opposed to building our own job dispatch and management logic within the platform itself.
Where choices exist, the DrupalCI team has aligned with existing standards used by the Drupal.org infrastructure team, to ensure that we were not introducing new and unfamiliar systems into the drupal.org environment. The Drupal Association has also made available time for existing infrastructure staff to participate in the design, development, and deployment of the DrupalCI architecture; to ensure their familiarity with the system and ability to maintain the platform over the long term.
The author's post centers around 'serial versus parallel' execution as the main driver for increased speed. While the parallelization approach used by ReviewDriven did result in faster test result returns, benchmark testing against PIFT/PIFR demonstrated that this speed came at the cost of about 10% additional overhead from a processing perspective; due to the need to reproduce the test setup stage on a new VM for each parallel arm of the batch.
The DrupalCI architecture has been designed to support this same parallelization approach; but is expected to deliver at a lower overhead cost thanks to the 'build one, spawn many' container approach enabled by the use of Docker (which, admittedly, ReviewDriven could also be modified to support).
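The 'build one, spawn many' pattern can be sketched in shell. This is an illustration, not DrupalCI's or ReviewDriven's actual tooling: the image name, test groups, and worker count are made up, and the docker commands are echoed (dry run) rather than executed, to show the shape of the dispatch.

```shell
# Sketch of 'build one, spawn many': build the environment image once,
# then launch one container per chunk of test groups. All names below
# (ci/web, the group list) are illustrative.

GROUP_LIST="field node user views form system"   # hypothetical test groups
WORKERS=3

# Build the shared environment image exactly once.
echo "docker build -t ci/web:latest ."

# Round-robin the test groups across N workers.
i=0
for g in $GROUP_LIST; do
  idx=$((i % WORKERS))
  eval "cur=\$chunk_$idx"
  if [ -n "$cur" ]; then cur="$cur,$g"; else cur="$g"; fi
  eval "chunk_$idx=\$cur"
  i=$((i + 1))
done

# Spawn one container per worker from the single prebuilt image; the
# expensive setup (checkout, install) is baked into the image, not repeated.
w=0
while [ "$w" -lt "$WORKERS" ]; do
  eval "cur=\$chunk_$w"
  echo "docker run --rm ci/web:latest run-tests.sh --group $cur"
  w=$((w + 1))
done
```

The point of the pattern is in the comments: the per-arm setup cost that the VM approach pays N times is paid once at image-build time, which is what lets the container approach undercut the ~10% overhead figure discussed above.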
As a generic job automation platform, the DrupalCI architecture has been designed with extensibility in mind. New plugins and functionality can be introduced through the introduction of self-contained PHP classes, which extend a base Job class containing a number of modular DrupalCI logic components. Each plugin defines its own 'build steps', with each build step able to leverage, override, or ignore the base logic components as needed. Plugins can even go so far as to define their own self-contained logic flow and structure; making it theoretically possible to (as an example) swap out the entire DrupalCI backend test runner for another test runner of choice (leveraging API interfaces to other systems, such as Travis or CircleCI).
This extensibility is built into the drupalci console application; but may not be immediately apparent if the observer were to only consider the proof of concept code.
The blog post author rightfully identifies that DrupalCI contains many more surfaces for attack, which all require proper configuration. Putting aside the irony of a security assessment of a system that, by definition, is intended to run arbitrary user-supplied code, the use of Docker containers is intended to help provide a level of isolation between the test runner and the associated infrastructure.
3rd Party Integration:
The DrupalCI architecture leverages a full API-based design, with both test requests and test results being passed through defined API interfaces. This architecture allows us to swap out the drupal.org integration, jenkins environment, or results display servers without affecting the code associated with any other component ... or alternatively allows us to potentially interface with multiple incoming test request or outgoing test result servers simultaneously.
As with any new system, there will be instability and there will be bugs. But we also recognize that a stable testing environment is especially critical during the push to a new major release. To this end, the DrupalCI deployment plans include a parallel deployment alongside the existing PIFT/PIFR system, and a lengthy 'soak' period for the platform before we would ever consider moving fully away from the existing architecture.
The 'serial execution' argument mentioned here has already been explained above; with an architecture that will support parallel test execution with lower processing overhead than the existing VM-based ReviewDriven system.
The blog post mentions that DrupalCI is "... intended to include automatic EC2 spin up, but does not yet exist". However, end-to-end test runs, demonstrating the entire path from our entry API through to an on-demand, dynamically created EC2 test runner, have been functional within our development environment since somewhere around the DrupalCon Amsterdam time frame.
This is an area that is currently lacking in the DrupalCI development efforts, but unit test coverage of all PHP code in the solution is intended to be added prior to full release to the Drupal community.
See '3rd party integration' above.
3rd Party Code:
The DrupalCI architecture has been designed to support not only Drupal.org testing, but also deployment of the end-to-end system by individual organizations, as fully private DrupalCI deployment instances.
Currently the DrupalCI system supports simpletest testing and travisci.yml test instruction compatibility (using Travis's own build code). Fast-follow plugin plans include phpunit-only testing, phpCodeSniffer testing, and a number of issue queue automation tasks mentioned earlier (such as automatic interdiffs and patch rerolls).
The blog post assumes the use of Jenkins as the end-user interface for test results; but due to the same limitations identified in the blog post, this is certainly not the intent. The DrupalCI project intends on using a standalone Drupal 8 site to present build artifacts and results ... this initiative is still under development, and a request for community feedback has been put out as part of the introduction to this portion of the project; as documented at https://groups.drupal.org/node/451023.
If you made it this far, congratulations and thank you for sticking with the discussion ... hopefully this post helps provide some of the context which was missing from the original blog post - additional context which may help support a more fair and informed technical evaluation of the two solutions.
And, as always, I encourage anyone interested in the DrupalCI initiative to contact myself or any one of the DrupalCI development team to get involved ... we can be found hanging out in the #drupal-testing channel on IRC.
While the prior comment is lengthy, it contains nothing that changes the conclusion in the original post. Rather it reinforces those conclusions. This comment will respond to many of the technical topics mentioned, since that is my focus; and will ignore some of the other topics since they are non-productive.
Nothing that Jeremy writes challenges the original conclusion (i.e. DrupalCI is an unnecessary attempt to recreate and mimic RD); his points fall into the categories addressed below.
What is lacking are any clear benefits of the new system that would justify abandoning the RD system to start anew. The PIFT and PIFR solution has been working just fine and could be updated to incorporate any of the items discussed. The reason I chose to start anew was to take advantage of many features Drupal 7 brought to the table and in so doing reduce the total amount of code, make it more flexible, and most importantly open up a world of possibilities with the resources freed up through parallel execution.
I wanted to make it clear how simple incorporating Docker would be, so I spent some time this weekend to accomplish it. In just two hours, without any pre-planning (I haven't touched the code in years) or changes to the code base, this was completed. The same approach could be applied to PIFR and provide everything mentioned in regards to the “local first” approach. In fact, given a couple more hours I could add a CLI interface identical to DrupalCI's.
The “local first” approach suffers from the speed limitations of running on your own box. One must ask oneself, “Why would someone want to run the tests on their box when there are enough resources to run them on a fleet of more powerful machines?” Developers have been rapidly uploading iterations of the same patch precisely to take advantage of the more powerful testbot machines.
All along, my plan was to integrate this workflow into the tooling by providing a CLI tool to execute jobs on the testbot fleet, just like patches, and respond back to the CLI tool. (Interestingly, I have since discovered this is exactly what is done at Google.)
The problem is not so much being able to run the tests on your local machine, but rather getting results quickly enough to not interrupt your workflow. Backed by RD parallelism and using such a command-line tool, the iteration time is reduced by an order of magnitude (along with developer frustration due to workflow distractions, i.e. generating and submitting a patch to trigger the testbot).
To be clear, the core of the DrupalCI effort is a simple Docker wrapper that starts up a database container and a web container linked to the database container, and runs the tests. The only benefit from this is the set of Docker containers with all of the dependencies listed. This can be seen in the code (http://cgit.drupalcode.org/drupalci_testbot/tree/containers/web/run.sh#n461), which can be paraphrased as follows:
# mount the workspace inside the container, link it to the db container (i.e. expose its ip/port),
# and execute run-tests.sh ('web' stands in here for the testbot's web image)
docker run -d --name db mysql
docker run -v /path/to/workspace:/var/www --link=db:db web run-tests.sh …
It is unclear where the fallacies arose regarding the RD system being tied to Drupal testing, but I will put them to rest. The Drupal-specific bits are not even in the base system; a separate module provides them (in a separate repository). It already has support for running commands prior to testing, or even specifying an arbitrary command in lieu of a more specific plugin (that's all TravisCI does to be generic). Anything more can always be added, as the system fully leverages Drupal's flexibility. I had plans to support testing for various languages. The system lets plugins specify the queue they should be placed in, and lets workers specify which plugin types they support; the queue items are then consumed by workers of that type.
It would be trivial to build a C++ compiling environment and have those jobs be processed independent of PHP jobs on the same deployment. One could also use that system to provide levels of priority for Drupal, like core first, then popular contributed modules, then others. To say that “patch conflict detection, automatic patch re-rolls, and automated interdiffs” are not in the scope of RD is ironic considering the plugins were functional years ago and were ideas I had before even starting the system.
Even PIFR was flexible enough to provide on-demand demo sites for functionality or UI reviews without the need to have a local Drupal environment (https://www.drupal.org/project/pifr_demo).
Just as the current testbot uses cached git repositories to speed up checkouts, so too does the RD system. Caches were not set up on each of the RD workers (unlike PIFR), as the RD workers were intended to be ephemeral (i.e. started up and shut down on demand). Instead, a proxy was set up to provide the caches and thus eliminate most of the overhead. Additionally, RD has other tuning mechanisms that could be adapted for d.o infrastructure, especially now that the testbot workers are all on AWS.
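The proxy/cache idea described above is the same trick `git clone --reference` provides: keep one shared object store and let every ephemeral worker borrow its objects instead of refetching them over the wire. A minimal local sketch of the pattern (the throwaway repo and paths are illustrative; real use would mirror the actual drupal git repository):

```shell
# Local demonstration of the shared-cache pattern: one mirror acting as the
# proxy, with each ephemeral worker cloning by reference to it.
set -e
tmp=$(mktemp -d)

# Stand-in "upstream" repository (replaces the real drupal.git for the demo).
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial"

# One-time: the shared object cache, maintained near the workers.
git clone -q --mirror "$tmp/upstream" "$tmp/cache.git"

# Per job: the worker borrows objects from the cache rather than refetching,
# so per-worker checkout overhead stays tiny even on freshly started machines.
git clone -q --reference "$tmp/cache.git" "$tmp/upstream" "$tmp/job"

git -C "$tmp/job" log --oneline | grep -q initial && echo "cache clone ok"
```

The worker clone stays disposable: it can be deleted with the machine, while the cache persists and only needs incremental fetches to stay current.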
I was taken aback by your claim of 10% overhead, so out of curiosity I ran some tests on EC2 this weekend using the Docker image mentioned above (and without the git cache). What I saw was exactly what I expected: very little overhead. The metrics below use “duration” (total machine processing time, summed across all machines) and “elapsed” (real time since the job was started).
full core, 5 c4.xlarge
duration: 41 min 11 sec
full core, 10 c4.xlarge
duration: 36 min 36 sec
full core, 2 c4.8xlarge
duration: 8 min 29 sec
With the additional optimizations present in the RD infrastructure, these numbers get even better. So in the worst case, with 10 machines (second data set), no cache, and no optimization, I saw ~1.2% overhead (roughly 3 seconds to clone core with a depth of one and perform the initial Drupal install). With optimizations the overhead very nearly approaches zero. Keep in mind there is a startup delay for each machine without the optimized approach, which means the elapsed time includes up to 60 seconds of idle time per machine (not present with optimizations).
These numbers are impressive on their own, not to mention what an optimized setup looks like. Any time I look at the qa.drupal.org test client list (worker machines), about 70-90% are idle. That is effectively 70-90% overhead, since those machines are wasted. Dropping to even 10% would be a huge improvement, much less 1% or below. Additionally, with automatic startup/shutdown of machines the waste diminishes further.
I found no evidence to support a claim of a parallel approach in the DrupalCI code I looked at (and there is very little code), nor any reference in the documentation. Even using Docker would require generating an image, with code checked out and the install completed, on one machine and then copying it to the others. That means the other machines are idle while they wait for the image to be created. In other words, all the other machines would have to (A) wait the full startup time and then (B) copy the image before being able to start. By definition this should be longer, since it includes the extra step of transferring the image (B). A + B will always be greater than A, assuming positive lengths of time (excluding time travel).
All I saw was the ability to start up different environment containers (all provided by Docker), which does not provide parallel processing. If you start tests in three environments on a single machine, it takes three times as long to get results. That is not equivalent to RD parallelism, not even close. Even if parallelism does exist (or is planned), it is yet another feature being duplicated, not an improvement. Since I see no evidence or mention of it in the documentation, the lack thereof is a huge detriment to DrupalCI, as resource utilization is the primary problem facing the testbot.
Contrary to your claim, my post does not assume Jenkins to be used as the end-user interface. Rather it states clearly why Jenkins is not suitable for this use. Job dispatching is the only useful bit Jenkins provides and that is a very small part of the overall system architecture. The Jenkins approach to dispatching hinders parallelism. Because of this, it is not worth the investment to configure/run Jenkins/Java.
Given that the results interface is the piece that everyone uses every day, it is of pivotal importance that it properly exposes the functionality of the system. It is the most important and complex piece to do well, thereby making it the component of greatest risk. Best practice in the software development world is to complete the interface first.
In contrast, the RD interface is one of its highlights as it provides a flexible, concise, and well-engineered UI. It beautifully and properly leverages fields and views, among other things.
As was the premise of the original post, the DrupalCI project is an effort to recreate what we have in RD with no clear benefits but definite disadvantages. I would like to put behind us all the mistakes that have been made and move forward in the best interest of Drupal. In advance of a proper solution to the funding problem, and unless there are some glaring technical reasons for not using the RD platform, I plan to reclaim my “hijacked” project and continue work towards deployment. I would welcome help from anyone to join me in rolling out the RD codebase.
[NOTE: To support my claim of ownership to the testbot, regrettably I must include some context.
Something that seems to be lost here is the fact that I created the testbot. The RD proposal was about funding critical drupal.org infrastructure and making it self sustaining. Contrary to people’s assumptions, it was not me proposing to update my project to use the latest code. (RD is essentially version 3.0 of the testbot.)
Due to this confusion, during subsequent discussions it was assumed I had to run everything by jthorson (example https://www.drupal.org/node/1477308#comment-5723686, but the sentiment long predates this comment). Somehow, over the course of a couple months of being minimally involved in PIFR (due to waiting for a response to the RD proposal and the imposed moratorium on deployments to the testbot), my project was given away. I would refer people to the official workflow for transfer of ownership (https://www.drupal.org/node/251466), which was not followed in this unwanted and hostile takeover. Would anyone question Dries making an executive decision in regard to Drupal core, even though he has been minimally involved in its direct development of late?
The only reason the takeover was possible was because I had freely given out co-maintainer status and testbot access to anyone interested in helping. In so doing they could appear as leads and did little to dissuade people of that notion.]
I read the last post and think the same as https://www.drupal.org/node/2425683#comment-9620593
Since I intend to propose MongoDB for core at DrupalCon LA, I have a work interest in having an automated testbot which can support multiple databases. I have written the drupalci_testbot integration with MongoDB, so I have some understanding of that system. I also coded a proof of concept at the end of 2012 that distributed the tests to a number of OpenStack hosts, so I have some understanding of the challenges that raises as well. But none of this matters. I could (and did) write about some technical matters here and deleted them.
Because right now there are several people working on DrupalCI (thanks!), and while technologies can be debated, this is a fact. Is ReviewDriven technically superior? That, alas, doesn't matter. Where will the people, multiple people, come from to help if we now say "hey, let's scrap DrupalCI and use ReviewDriven instead"?
@boombatower: Can you please link to which projects had their ownership transferred away from you and where you no longer own or have access to them?
Claim: DrupalCI is an unnecessary attempt to recreate and mimic RD
I do not challenge the statement that DrupalCI is recreating and mimicking functionality of ReviewDriven. However, I challenge the 'unnecessary' portion of this comment. No matter how you try to spin it, the fact is that Randy got involved with the testbot, and me after him, because no one else was doing it. This was not a 'hostile takeover' ... this was a 'mop-up' after a maintainer of *key* community infrastructure got their nose out of joint and walked away. All I wanted to do was learn what we had for automation capabilities that could be used in the Project Application queue ... and instead, I ended up inheriting a system that had been neglected and abandoned by its original maintainer.
You stated yourself that you haven't touched the ReviewDriven code in years ... how was I, or anyone else in the community, supposed to perceive this as anything but an abandoned project? Because the fact is, it *was* abandoned. The ReviewDriven proposal was written years ago, the repository has sat idle, and the testbot queue has not seen even a hint of your presence in years. I didn't pursue the deployment of the ReviewDriven solution after you disappeared again, SPECIFICALLY because of the concerns you expressed in Denver: that you perceived the community as "taking advantage of you and only wanting your code". I made a conscious choice to abandon those efforts, because I didn't want to continue down any path which would further perpetuate this misperception ... so I walked away from your code, despite the obvious benefits it could provide us, in order to respect YOUR sensitivity and avoid the exact 'perceived ownership' problem that you are now trying to drag up with respect to PIFT/PIFR.
I would have been more than happy to deploy ReviewDriven, which I committed to doing (or rather, evangelized!) at DrupalCon Munich ... but for this to happen, I needed you to finish the job - and eliminate the 27 false negatives which ReviewDriven was throwing during a D8 core test run. This never happened. We couldn't launch while the core test suite was failing ... Where were you then?
Let's assume, for a moment, that you *are* available and willing to put time back into improving the automated testing capabilities on Drupal.org (and this isn't likely to be followed by another tantrum and storming off at the next difference of opinion you encounter within the Drupal community). If you're serious about this, then I applaud, welcome, and encourage you ... it would be fantastic to have you back in the community, and to be able to leverage your knowledge and experience within the community's automated testing efforts. But I must point out, you are taking entirely the wrong approach. This is an open-source community, and a do-ocracy. There is no ingrained right of ownership to any component, by virtue of having been involved years ago ... if you are looking to re-engage, and genuinely want to help out the community, then approach those currently working in that area and ask how you can assist. Approach us and have a conversation in person, instead of passive-aggressively trying to stir up a debate, and doing an end-run on our efforts through a 'look at me!', flag-waving, self-gratifying, inflammatory blog post. If you want to build support for your argument within the community, please consider the 'collaboration, not competition' mantra that this community prides itself on ... try working *with* people, and you might find people are willing to start listening.
You asked for the benefits of the new system that justify abandoning the ReviewDriven system; and then did a technical comparison. This is where you went wrong ... because the most critical of those benefits are *not* technical. The true benefits are:
- Having a dedicated GROUP of developers/maintainers knowledgeable of the system (i.e. no single point of failure, which is the issue that bit us repeatedly on PIFT/PIFR, and prevented the rollout of ReviewDriven years ago).
- Having an architecture that is aligned with (and using a common toolset with) the rest of the environment managed by the Drupal infrastructure team, so that the testbot is no longer a 'black hole' that they have no visibility into.
- Having a modular system that supports a distributed delegation of responsibility for individual components within the community, instead of burdening any one individual with the maintainer accountability for the end-to-end solution.
- Having an open-source system that is developed from the ground up, with full transparency, in front of the community from day one.
If you want to see ReviewDriven deployed *and accepted* within the Drupal community, I'd suggest that these are the benefits that you need to answer to. And, unfortunate as it is, you are starting from a point of limited credibility; as someone who has abandoned the community not once, but *twice* in the past (though I realize that you certainly don't perceive it this way).
Again, I encourage your return and re-engagement, and would more than appreciate your assistance in the development and deployment of an automated testing infrastructure that is best positioned to meet the *current* needs of the Drupal project ... and I'd be happy to entertain at any time any questions you might have about the DrupalCI initiative, the architecture, or its goals - so that you can then make a truly unbiased and fair assessment of DrupalCI against your own solution.
With that, I'm going on record and saying that deployment of DrupalCI needs to be the *current* focus of any automated testing efforts, given that the capabilities it is intending to enable are critical blockers towards any potential Drupal 8 release ... and to evaluate any alternative solution at this point in time would unnecessarily detract and delay from those essential deployment activities. So I'm marking this as 'won't fix'. It *is* a viable solution, but frankly, this is a case of "too little, too late".
If you insist on pursuing the claim that this was a 'hostile takeover' of your project, then I would encourage you to take your argument to the Community Working Group, which can handle this in a more professional (and appropriate!) forum than inside a particular drupal.org project issue queue.
I agree; I don't think there was any hostile takeover. The Drupal.org developer community required changes and improvements, and the team continued to deliver them by incrementally working with the existing code. If ReviewDriven had been able to overcome the hurdle of actually completing the D8 test suite without false errors, I don't doubt the possibility that things may have taken a different path. But that never happened, and I don't see that point being contested.
I'm happy with the current direction of DrupalCI because the team has been actively involved, accountable, transparent, and communicative, at major Drupal events and the times in-between.
Agreed if this is to be discussed further, the Community Working Group needs to be involved.
@chx: You seem to share the false impression that PIFR/RD do not support multiple databases. I can only guess where you received the basis for such an impression. With you being a seasoned developer, it surprises me that you would not realize how trivial it would be to add such support, or investigate the veracity of the claim. Regardless, both PIFR and RD support multiple databases, as is evident from the links below.
@Dave Reid: For those who do not remember, I was very active in publishing my activities and plans for the testbot (when not being derailed), from extremely detailed workflows and technical discussions, to YouTube demonstration videos, DrupalCon sessions and sprints, and IRC. Please refrain from making false statements.
The "false errors" you and Jeremy refer to are not an indication of RD failing or something "to overcome" but rather that RD is working correctly as it is reporting test failures due to improper environment configuration. This is not a new issue facing the testbot (see very old example https://qa.drupal.org/node/39).
To imply that it was easier to build an entirely new system rather than fix the environment is truly astounding. Any environment issues would have to be fixed in Docker or any other approach as well.
Continuing to apply this "logic" we should scrap Drupal the next time a bug report is filed and begin anew.
The overwhelming sentiment expressed on this issue is, "Any injustice can be dismissed as long as agreement exists between at least two people." Is this the face of Drupal? Does this not walk contrary to the code of conduct and other guidelines regarding community interaction? Should we not be more concerned about the deterioration of community respect for members instead of some imagined traction for an admittedly inferior design?
I've gone ahead and filed a Community Working Group incident report because I feel uncomfortable with this issue and the tones being taken mentioning the code of conduct.
"The deterioration of community respect for members" started when you chose to go over the top on the people working on this project, instead of offering your opinion and advice to them, or making an effort to understand our vision. You are now propagating this further by projecting 'shared false impressions' and accusing others of making 'false statements' where none exist.
Nor has anyone implied that "it was easier to build an entirely new system rather than fix the environment". This is a statement of your own fabrication. The closest thing to this statement is that I chose not to proceed with ReviewDriven, because you went AWOL and had expressed a sensitivity to the community using your code without your involvement. Never have I suggested it was "easier" to blaze our own trail.
With your "Any injustice can be dismissed as long as agreement exists between at least two people" comment, the resistance you're encountering is because you haven't got agreement that there was any injustice in the first place. The perception of injustice is yours alone ... though I know a number of people who would support a claim that your approach and persistence in raising this issue is performing an injustice of its own accord. Attempting to compare a finished work against a work in progress, without attempting to truly understand the final vision, is equally unfair.
It's easy to prove a point when you're fabricating the opposing arguments. Unfortunately, this approach does nothing to build support for your own cause. :/ Again ... if there's some injustice to be investigated here, take it to the CWG.
In the meantime, we can look at synergies with ReviewDriven once we've deployed the initial DrupalCI release, which unblocks the critical path towards a Drupal 8 release ... but in the meantime, please respect my time, effort, opinion, and decision to not waste more cycles debating this with you, when there are more pressing matters at hand.
The history and state prior to Thorson's arrival is sorely misrepresented, possibly due to a lack of all the information. I was forced to put off deployments of the testbot (as webchick and others can attest) due to the ill-fated Drupal 7 freeze. I continued development on the testbot for 8 months afterward in anticipation of eventually deploying the features. It became clear a rewrite was more beneficial (for the reasons already stated). The system ran just fine with minimal daily involvement by anyone, much less myself (as already stated). After derailing efforts, the features were deployed over time, and credit was misplaced and happily accepted by others. The freeze is what derailed everything, and the sentiments regarding ownership (as shown above) continued the derailment, not some imagined abandonment. There is endless context to back these things up, much of which is publicly available should anyone choose to do actual research.
Any involvement of new contributors could have been applied to understanding and deploying RD just as easily as starting from scratch. That can be said of virtually all the statements made. The continued mention of D8 blockers never gets backed up with facts on how PIFR or RD could not be used to solve them in 5 minutes.
To ask that I come to the current contributors is truly ironic considering Thorson never did the same for me when making unnecessary and flat wrong changes to PIFR. The attitude of taking over was perpetrated from the beginning; it just took me too long to realize it.
It is truly amazing that I get berated as disrespectful when I provide reasoned arguments with facts to back them up. In contrast the comments here (even about disrespect) lack support and are far more disrespectful than anything I could imagine writing.
If you apply Thorson's statements to himself you have something that begins to resemble the truth.
It is impressive the amount of misinformation that has been manufactured in such a short time. To a casual reader it even sounds somewhat cohesive. The problem is (as many books and articles will back up) that dispelling misinformation is an order of magnitude harder than creating it. As such, I have been exhausted trying to provide reasoned arguments against the endless stream of false information (even admitted to be so by several). This is the same problem I ran into years ago when trying to "collaborate" with Thorson. I became burned out (over several iterations) from having to endlessly argue and provide reasoned arguments against misinformation.
If anything Thorson excels at the manufacture of misinformation for the purpose of winning public opinion in his favor. It is truly impressive. Even more confusing is the folks chiming in who simply lack the facts and did not bother to look anything up, but seemingly chose a side and stuck with it regardless of facts.
Again I find myself exhausted facing the endless walls of misinformation, especially after it has been admitted that I was right. I would be happy to write a letter of recommendation for your career as a PR campaign manager as it clearly suits you.
The last post makes it perfectly clear that there is more to this than a 'technical evaluation' ... given that your latest comment has *nothing* to do with testbots or an evaluation between the options, and the whole issue has obviously become emotional enough for you to degrade to personal attacks, then it's obvious that there is nothing else to gain from continuing this conversation.
So with that, I'm closing this issue again, for the THIRD time since it was opened; and the fourth, if you include the postpone.
Please take the hint.
The Community Working Group has received some reports about this issue. 1 formal report, and a couple of informal "heads up" notes via IRC.
A few of you have invoked the code of conduct, and I can see why. Perhaps we should all take a moment to re-read it.
Then, take some deep breaths, and start with some apologies all round?
Status wars on a clearly contentious thread aren't a helpful or constructive way to resolve any issue. Making accusations about what other people may or may not have done, or intended to do, is also not going to get this issue resolved constructively.
So, I'm marking this issue postponed, until such time as the code of conduct issues are resolved.
Note: The CWG does not make technical decisions. We won't be arbitrating anything to do with CI or ReviewDriven or the Testbots. We are purely concerned with HOW we collaborate, not about WHAT we're collaborating about. I believe we all want the best for our project and community, so let's work together on that.
I can see there's a lot of history here, I've not had a chance to dig into it, nor do I have time to do so. The CWG will be discussing the issue.
In the meantime...
@boombatower, @jthorson, @Dave Reid - I'd like to draw your attention to the conflict resolution process (CRP) outlined here: https://www.drupal.org/conflict-resolution
Dave - thanks for submitting this report.
@jthorson and @boombatower - have you tried to chat about this in real time as our CRP recommends?
Is someone who does know the history here willing to mediate?
Postponed pending resolution of code of conduct issues.
Sorry it's been a while, but to update: I've reached out to the folks involved and am hoping to speak with them this week. As Donna mentions, the involvement of the CWG is regarding how we collaborate, and we will not be making any decisions about the technology. That side of things is driven primarily by the DrupalCI group and the Drupal Association.
I expect to have an update after discussions with those involved and then again with the CWG with the aim of feeding back any relevant outcomes within the next 2-3 weeks.
In the meantime we thank everyone for respecting the Code of Conduct and hope this issue will be resolved soon.
Just one small clarification:
the involvement of the CWG [...] will not be making any decisions about the technology. This side of things is driven primarily by the DrupalCI group and the Drupal Association.
The Drupal Association (and really, joshuami as the CTO) ultimately owns the technology decision, not the DrupalCI group. The DrupalCI group is a collection of various volunteers working on https://www.drupal.org/project/drupalci and assorted sub-projects.
Please excuse this misunderstanding. The CWG has specifically asked adshill to take this one since he has conflict mediation experience, but is not affiliated with the Drupal Association, was not involved in any of these discussions previously, and isn't a core/contrib developer, so can truly come in with a neutral point of view.
Unlike the greedy, the deceitful, the jealous, and the idiots too stupid to see the triumph of beautiful creation, I never wanted anything more than to offer my talents to the community. Unlike the who’s whos of the Drupal community I did not sit atop an empire of money nor desire one. All I desired was to continue providing my talents to better Drupal in significant ways.
Unfortunately for me the world demands one make money after finishing schooling in order to merely exist in the world. This is not something which is optional even if you wish to give away the fruit of your talents.
Instead of greeting my attempt to continue doing what I loved and to continue providing the Drupal community with what it desperately needed, I was met with scorn, ridicule, utter stupidity, and lies. To listen to such foolishness is one thing, but to experience the utter lack of support by the people who know better is disheartening. No one stood for the truth. No one gave a damn. The old adage was reinforced, “The only thing necessary for the triumph of evil is for good men to do nothing.”
Since I wanted the best for Drupal more than anything I curtailed my hope of providing sustainable contributions and once again offered the fruits of my labor completely free to those who had scorned me. Instead of a belated, welcome reception or even simple indifference I was met with ignorant resistance from vultures who preyed on my weakened status. Not content with the assassination of my character they busied themselves with the destruction of my work and ensured it would not proceed any further. Still none of my “friends” and colleagues within the community stood up for me, much less what was clearly best for Drupal.
And still after all this, I tried twice more.
I was never given a technical, much less valid, reason for all the hatred I received. Such a strong word is not capable of properly describing the disgusting venom and irrational emotion that has spewed forth by those opposed to reason.
All the technical points I made have continued to make themselves apparent as those now involved finally get their hands dirty and understand the true problems involved in this system. One of the best examples is the number of hours wasted over two and a half years by supposedly nearly a dozen people on this effort. In contrast, the admittedly superior solution took a single individual two months to create from scratch, and even the current solution a month. My reasons for using Drupal as a base library, since it is common to those most likely to contribute to the Drupal QA effort, have now been vindicated, as the new effort has imported a large amount of code from Drupal 8 and takes on the burden of maintaining a fork. Just look at the size of the code base and the over-complexity of the components involved, much less the fact that the resulting output interface is primitive by comparison and the end result represents absolutely no forward progress. Ironically, even some of those involved have lamented these issues publicly.
If one goes back and reads the solutions I proposed you will see their great importance and value to Drupal far exceeds any of the pointless efforts of late. Drupal QA has halted on nearly all fronts since my exile over five years ago. Instead of having made significant improvements over the years, Drupal has nothing but the same thing it had before except a hoped for reincarnation as a hideous beast.
I used to imagine what we might be working on a decade into the future in Drupal. I used to become concerned about how Drupal might fall apart. I used to strive to make Drupal its best. I used to give almost every free moment I had to Drupal. After all that I was cast out without so much as the slightest resemblance of reason. The Drupal community has managed to drive away one of their most dedicated contributors, a truly staggering feat by all accounts.
I was naive to think a community with the slogan, “Come for the software, stay for the community,” was actually driven by reason aimed at creating the best software. I was naive to think the world made decisions based on facts and a logical interpretation of them. This tragedy began a journey that has opened my eyes to the reality of the world and I see these events fit in perfectly.
Looking back at the history of innovation and technology it is clear the best technology almost never wins and if it does it has little to do with its technological supremacy. Literally everything has been tainted by the hideous reality of the world, even things held in high regard such as: nuclear reactors, the steam engine, computers, VHS, Microsoft, and countless others throughout the ages. In fact the pattern is so resoundingly clear that I now realize it is an honor to be included among history's disenfranchised great minds.
@adshill Any update here from you or the CWG? If not, I propose we lock this thread. I think it is no longer beneficial or productive for anyone involved to continue.
Agreed. I think everyone here has had their say.
I agree. I went ahead and locked the issue. If anyone from the CWG wants to chime in here and doesn't have the necessary permissions to unlock comments, send me a note through my contact form and I'll unlock it again.
The Community Working Group has agreed that while we consider this matter closed at this time, we would post a final update in this thread in the interest of transparency.
The CWG spoke to the involved parties after this issue was referred to us last spring. Based on those conversations, our determination was that no further action was required at the time, but that we would continue to monitor the situation in case additional intervention became necessary.
As kattekrab, gdemet, and webchick have pointed out, the CWG does not weigh in on technical decisions regarding the Drupal.org infrastructure; those decisions are made by Drupal Association staff in consultation with the Drupal.org Working Groups.
We understand that not everyone is happy with the decision that has been made and that some individuals feel intense frustration about this issue. As the Drupal Code of Conduct (https://www.drupal.org/dcoc#respect) states:
The Drupal community and its members treat one another with respect. Everyone can make a valuable contribution to Drupal. We may not always agree, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It's important to remember that a community where people feel uncomfortable or threatened is not a productive one. We expect members of the Drupal community to be respectful when dealing with other contributors as well as with people outside the Drupal project and with users of Drupal.
Everyone involved in this issue has had ample opportunity to make their voices heard, both publicly and privately. While we will continue to keep this thread locked, if anyone feels that there are additional community issues that need to be addressed by the CWG, they are welcome to bring them to our attention by posting an issue in our queue (https://www.drupal.org/project/issues/drupal_cwg) or filing a private incident report (https://www.drupal.org/governance/community-working-group/incident-report).
While we hope that no additional intervention will be necessary, please note that the CWG reserves the right to take additional action if necessary.