Why test?

The goal of automated testing is confidence: confidence in application stability, and confidence that new features work as intended. Continuous integration as a philosophy is about increasing the rate of change while maintaining stability. As the number of contributing programmers increases, so does the need for automated testing as a means of proving stability.

This post is focused on how the automated testing infrastructure on Drupal.org works, not on how to write tests. Much more detail about writing tests during Drupal development can be found in the community documentation.

Categories of testing

DrupalCI essentially runs two categories of tests:

Functional tests (also called black-box testing) are the most common type of test run on DrupalCI hardware. These tests install Drupal with a fresh database, then exercise that installation by inserting data and confirming that the assertions pass. Front-end tests and behavior-driven development (BDD) tests tend to be functional. Upgrade tests are a type of functional test that runs a full installation of Drupal and then runs upgrade commands.
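To make that concrete, here is a rough sketch of what a functional test looks like in Drupal 8's SimpleTest-based WebTestBase. The module name, content type setup, and assertions are made up for this post; the point is that the test installs a fresh site and then drives it over HTTP.

```php
<?php

namespace Drupal\example\Tests;

use Drupal\simpletest\WebTestBase;

/**
 * A hypothetical functional test: installs Drupal, then exercises a page.
 *
 * @group example
 */
class ExamplePageTest extends WebTestBase {

  /**
   * Modules to enable in this test's fresh installation.
   *
   * @var array
   */
  public static $modules = ['node'];

  /**
   * Creates content through the real site and asserts it is visible.
   */
  public function testNodeIsVisible() {
    // A fresh Drupal install was created in setUp(); add a content type.
    $this->drupalCreateContentType(['type' => 'page', 'name' => 'Basic page']);

    // Log in as a user with the permissions the test needs.
    $account = $this->drupalCreateUser(['access content', 'create page content']);
    $this->drupalLogin($account);

    // Insert data into the installation.
    $node = $this->drupalCreateNode(['type' => 'page', 'title' => 'Hello DrupalCI']);

    // Exercise the installation over HTTP and confirm the assertion passes.
    $this->drupalGet('node/' . $node->id());
    $this->assertText('Hello DrupalCI');
  }

}
```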

Unit tests run assertions that test a unit of code and do not require a database installation. This means they execute very quickly. Because of its architecture, Drupal 8 has much more unit test coverage than Drupal 7.
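For contrast, a PHPUnit unit test needs no installed site at all. A minimal sketch follows; the test class name and placement are illustrative, though it exercises a real core utility function in isolation.

```php
<?php

namespace Drupal\Tests\example\Unit;

use Drupal\Component\Utility\Html;
use Drupal\Tests\UnitTestCase;

/**
 * A hypothetical unit test: no database and no installed site required.
 *
 * @group example
 */
class HtmlEscapeTest extends UnitTestCase {

  /**
   * Tests a single unit of code, a pure function, in isolation.
   */
  public function testEscaping() {
    // Runs in milliseconds because nothing needs to be installed first.
    $this->assertSame('&lt;em&gt;DrupalCI&lt;/em&gt;', Html::escape('<em>DrupalCI</em>'));
  }

}
```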

These test categories can be broken down further into more specific test types.

What testing means at the scale of Drupal

Drupal 8, with its 3,000+ core contributors and 7,288 contrib developers (so far), needs testing as a means of moving forward with code that everyone can trust to be stable.

Between January and May 2016, 90,364 test runs were triggered in DrupalCI. That is about 18,000 test runs requested per month. Maintainers set whether they want tests to run on demand, with every patch submitted, or nightly. They also determine what environments those tests will run on; there are 6 combinations of PHP and database engines available for maintainers to choose from.

The majority of these test runs are Drupal 8 tests at this point. (19,599 core test runs and 47,713 contrib project test runs happened during those 5 months.) Each test run costs about 12 cents on Amazon Web Services. At the time of writing this post, we average around $2,000 per month in testing costs for our community. (Thank you supporters!)
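(As a rough back-of-envelope check: about 18,000 runs per month at roughly $0.12 per run works out to around $2,160 per month, which lines up with that $2,000 figure.)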

An overly simple history of automated testing for Drupal

Automated testing first became a thing for Drupal contributed projects during Drupal version 4.5 with the introduction of the SimpleTest module. It was not until Drupal 6 that we started manually building out testbots and running these tests on Drupal.org hardware.

In Drupal 7, SimpleTest was brought into Drupal Core. (More information about what that took can be reviewed in the SimpleTest Roadmap for Drupal 7.)

In Drupal 8, PHPUnit testing was added to Drupal Core. PHPUnit tests are much faster than full functional tests in SimpleTest, though run-tests.sh still triggers a combination of these test types in Drupal 8.

The actual implementation of automated testing was much more complicated than this history suggests. The original testbot infrastructure that ran for 7 years on Drupal.org hardware was manually managed by some fiercely dedicated volunteers. The manual nature of that maintenance led to the architecture of DrupalCI, which was meant to make it easier to test locally at first and later focused on autoscaling on powerful hardware that could plow through tests more quickly.

DrupalCI's basic structure

In The Drupal.org Complexity, we saw the intricate ways that Drupal's code base interacts with other parts of the system.

Representation of the relationships between services and sites in the Drupal.org ecosystem.

We can further break out how systems like DrupalCI are interrelated.
Highlighted relationships between testing and other services.

DrupalCI is a combination of data stored on Drupal.org, cron jobs, drush commands, and most importantly a couple of Jenkins installations to manage all the automation.

Jenkins is the open source automation server project that makes most of the system possible. We use it for automating our build process and deploying code to our dev, staging and production environments. It automates just about anything and is used by companies small and large to run continuous integration or continuous deployment for their applications. It's considered a "best practice" solution alongside options like Travis, CircleCI, and Bamboo. They all have slightly different features, but automation is at the core of most of these DevOps tools.

To provide continuously integrated tests, you need to trigger those tests at a moment when the tests will have the greatest value.

The three triggers for running a test job are when a patch is added to an issue comment, when code is committed to a repository, or daily via cron. Maintainers can specify which triggers are associated with which branches of their projects and which environments should run those tests.

For core these settings look something like this:

Screenshot of the automated testing settings for Drupal Core.

This detail allows for specific tests to run at specific times per the Drupal.org Testing Policy for DrupalCI.
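Conceptually, the decision DrupalCI has to make for each trigger boils down to something like the sketch below. The function name, array keys, and environment labels are invented for illustration; this is not DrupalCI's actual code, just the shape of the logic.

```php
<?php

/**
 * Illustrative only: which environments should be tested for a given event?
 *
 * $event is one of 'patch_uploaded', 'commit', or 'daily_cron'.
 * $branch_config is a made-up stand-in for a maintainer's per-branch settings.
 */
function example_environments_to_test($event, array $branch_config) {
  // A maintainer may enable any combination of the three triggers per branch.
  if (empty($branch_config['triggers'][$event])) {
    return [];
  }
  // One job is queued for each environment the maintainer selected.
  return $branch_config['environments'];
}

// Example: a maintainer who tests every submitted patch on two environments.
// The environment strings are illustrative labels, not DrupalCI's exact names.
$branch_config = [
  'triggers' => ['patch_uploaded' => TRUE, 'commit' => FALSE, 'daily_cron' => FALSE],
  'environments' => ['php5.6/mysql5.5', 'php7/mysql5.5'],
];
print_r(example_environments_to_test('patch_uploaded', $branch_config));
```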

To make this automation happen, we have an installation of Jenkins (Infrastructure Jenkins below) that polls Drupal.org once per minute for testing jobs to be queued.

These jobs live in database records alongside Drupal.org.

The Infrastructure Jenkins instance polls Drupal.org once per minute looking for new jobs to queue.
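In rough terms, that polling step looks something like the sketch below. The endpoint, field names, and helper function are invented to illustrate the flow; the real integration is a Jenkins job talking to Drupal.org's project infrastructure.

```php
<?php

// Conceptual sketch only: the endpoint and JSON structure are placeholders
// showing the shape of the once-per-minute polling step.
$response = file_get_contents('https://example.org/drupalci/pending-jobs.json');
$jobs = json_decode($response, TRUE) ?: [];

foreach ($jobs as $job) {
  // Each queued record carries the project, the patch or branch to test, and
  // the environment (PHP version plus database engine) the maintainer chose.
  example_queue_dispatcher_build($job);
}

/**
 * Stand-in for handing a job to the CI Dispatcher (also Jenkins).
 */
function example_queue_dispatcher_build(array $job) {
  // In reality this becomes a parameterized Jenkins build; here we just log it.
  printf("Queued %s (%s) on %s\n", $job['project'], $job['patch'], $job['environment']);
}
```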

Infrastructure Jenkins speaks to the CI Dispatcher (also Jenkins), where the testing queue regularly passes those jobs to available testbots. The CI Dispatcher uses an Amazon Web Services EC2 plugin to spin up new testbots when no existing testbot is available. Each testbot can spin up Docker containers with the specific test images defined by the maintainer. These containers pull from DockerHub repositories with official combinations of PHP and database engines that Drupal supports.

The CI Dispatcher maintains the queue of jobs to run. When a job is ready, it uses an EC2 plugin to use an existing testbot or spin up a new bot as needed.
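To make the environment idea above concrete, here is a purely hypothetical sketch of mapping a maintainer's chosen environment onto container images. The image names are made up; the real images live in DrupalCI's Docker Hub repositories.

```php
<?php

// Hypothetical mapping from a maintainer-selected environment to the pair of
// containers a testbot starts. Image names are invented for illustration.
$environments = [
  'php5.6/mysql5.5' => ['web' => 'example/php-5.6-apache', 'db' => 'example/mysql-5.5'],
  'php7/mysql5.5'   => ['web' => 'example/php-7.0-apache', 'db' => 'example/mysql-5.5'],
  'php5.6/pgsql9.1' => ['web' => 'example/php-5.6-apache', 'db' => 'example/pgsql-9.1'],
];

$choice = 'php7/mysql5.5';
// A testbot pulls both images, starts the database container, and runs the
// test suite inside the web container pointed at that database.
printf("docker pull %s\ndocker pull %s\n", $environments[$choice]['web'], $environments[$choice]['db']);
```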

After a testbot is running, the CI Dispatcher is in constant communication with the bots. You can even click through to the console on CI Dispatcher and watch the tests run. (It's not very exciting—perhaps we should add sound effects to the failures—but it is very handy.)

Once the testbot has been spun up, the CI Dispatcher listens to it for results.

Once per minute, Drupal.org polls the CI Dispatcher for test status. It responds with pending, running, failed or passed. Failed and passed tests are then pulled back into Drupal.org for display using the Jenkins JSON API.

Drupal.org polls the CI Dispatcher for status (pending, running, failed, or passed), and the results are pulled back into Drupal.org using the Jenkins JSON API.
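The status check leans on Jenkins' standard JSON API, which exposes a build's state at a build URL followed by /api/json. A minimal sketch of reading that state; the dispatcher URL and build number are placeholders.

```php
<?php

// Placeholder build URL; a real one points at a job on the CI Dispatcher.
// Jenkins reports a "building" flag and, once finished, a "result" field.
$build_url = 'https://dispatcher.example.org/job/default/12345';
$data = json_decode(file_get_contents($build_url . '/api/json'), TRUE);

if (!empty($data['building'])) {
  $status = 'running';
}
elseif (isset($data['result']) && $data['result'] === 'SUCCESS') {
  $status = 'passed';
}
elseif (isset($data['result'])) {
  // FAILURE, UNSTABLE, ABORTED, and so on all read as a failed run here.
  $status = 'failed';
}
else {
  $status = 'pending';
}

// Drupal.org records this status against the test result shown on the issue.
print $status . "\n";
```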

Tests can also be run on demand at the patch, commit or branch level using the handy "add test" and "retest" links.

Why did we build this ourselves? Why not use [insert testing platform here]?

Lots of people have asked why we don't use TravisCI, CircleCI, or some other hosted testing solution. The short answer is that most publicly available testing systems require GitHub authentication and integration.

Additionally, our testing infrastructure is powerful because of its integration with our issue queues. Read the aforementioned The Drupal.org Complexity for more information.

Another reason to run our own testing is scale. To get through all of the core tests for Drupal 8 in an acceptable amount of time (about 44 minutes on average), we run very large testbots. These bots have 2 processors with 8 hardware cores each. With hyperthreading, that gives us 32 execution threads, roughly 88 EC2 compute units. They are not exactly supercomputers, but they are very performant.

We average nearly 18,000 test runs per month. During our peak usage we spin up as many as 25 testbot machines—though usually we cap at 15 bots to keep costs under control. This helps us plow through our testing needs during sprints at DrupalCons and large camps.

We have explored using an enterprise-licensed version of either GitHub or CircleCI with our own hardware to tackle testing. The same consideration has been given to SauceLabs for front-end testing. Right now, there are no cost savings that would justify this migration, but that does not rule it out in the future. Testing services continue to evolve.

Accelerating Drupal 8

In my first months as CTO, I was told repeatedly that the most important thing for us to work on was testing for Drupal 8. In those early days as I built out the team, we were mostly focused on catching up from the Drupal 7 upgrade and tackling technical debt issues that cropped up. At DrupalCon Austin, I had members of my team learn how to maintain the testbot infrastructure so that we could take over the process of spinning up bots and dealing with spikes in demand.

By early 2015, we had optimized the old testbots as much as they were going to be optimized. We moved them to AWS so we could spin up faster machines and more bots, but key Drupal 8 development was still blocked by features that were waiting on the new DrupalCI infrastructure.

In March of 2015, we invited all the community developers who had helped with DrupalCI to the Drupal Association offices in Portland and sprinted with them to figure out the remaining implementation needs. The next couple of months involved tweaking DrupalCI's architecture and cutting out any nice-to-have features to get something launched as soon as possible.
It is no coincidence that progress accelerated rapidly from the time of DrupalCI's launch until the release of Drupal 8.

I am immensely proud of the work of all the community members and staff that worked directly with core maintainers to unblock Drupal 8 development and make it faster. This work was critical.

Thank you to jthorson, ricardoamaro, nick_schuch, dasrecht, basicisntall, drumm, mikey_p, mixologic, hestenet, chx, mile23, alexpott, dawehner, Shyamala, and webchick. You all made DrupalCI. (And huge apologies to all those I'm undoubtedly leaving out.) Also thank you to anyone who chimed in on IRC or in the issue queues to help us track down bugs and improve the service.

What's next for testing Drupal

Most of the future state of testing is outlined in the Drupal.org Testing Policy for DrupalCI.

The key issues we still need to solve relate to improvements in concurrent testing and support for new test types. While we have PhantomJS integrated with the test runner, there are optimizations that still need to happen.

Testing is not an endpoint. Like much of our work, it is an ongoing effort to continuously improve Drupal by providing a tool that improves how we test, what we test, and when we test.

Comments

hass’s picture

You'd better BUY some servers and you'd save a lot of the money spent running the tests. Unbelievable how much money you waste with Amazon AWS. Within less than 3 months you could buy hardware with much more horsepower and with 5 years of 4-hour 24/7 on-site service. A server cabinet at a hoster is also much cheaper compared to these AWS costs. What a waste of money. Unbelievable.

mherchel’s picture

Wow. Way to shit on everyone's parade.

I don't know what all went into the decision, but I'm sure there are good reasons behind it. Maybe you could have phrased it like:

Why did you choose AWS? In my experience the cost of AWS is much higher than purchasing several dedicated servers.

In the meantime, here's some constructive feedback for you: http://lifehacker.com/5915687/how-to-give-criticism-without-sounding-lik...

hass’s picture

Everyone using AWS has not made an economic comparison calculation and/or has no idea about IT! The jerk is the one who thinks AWS, or Cloud in general, is cool and cheap.

The only situation where cloud is cool is if you have no idea about your load and you need a server for only a few weeks and you know you will not need it longer. If you know your load (d.o does) and you know it will be required for 5-7 years (we all know it will), then it is always wrong.

Paying AWS 2000,- per month is highly stupid. For 6000,- you get a fat, fat server with tons of memory and real cores, not just virtual cores where you have no idea how fast they are. 10 times more power, and over the course of 5-7 years you save more than 10-15 times the money. After my shocking economic comparison calculation, I read that some people have written on the net that AWS has an 82% margin. That was nearly what I calculated myself.

I know what I'm talking about. It is not possible to prove that AWS is economically useful. Every other server hoster (if you still want to rent) is cheaper, too. At minimum 5 times. I'm not advertising any; doing it in-house is always the cheapest solution. And the Association should turn its money over 10 times before wasting it.

Mixologic’s picture

You do not have all the facts; therefore, you do not know enough to criticize our decision to utilize AWS.

We, d.o., do *not* know what the workload will be. Testing workload is dynamic, and bursty. Sometimes we need upwards of 30 *very large and expensive servers*, and sometimes we need zero.

So the only way that we could support running testing on our own hardware is either to have a gigantic backlog when devs need to get work done (especially during drupalcons, sprints, camps, etc), or overbuy tons of hardware that will sit idle almost all of the time so that we have enough capacity.

The second thing you are ignoring is that the cost of owning your own hardware is not *just* buying the hardware. If you are going to have long-running servers, you have to maintain them. That takes staff labor and time. Every time there's a new Shellshock, or Heartbleed, or whatever branded security crisis comes around, that means you have that many more servers to fix/administer. And hardware has failures. Network cards fail, disks fail, power supplies fail. You need staff labor and time to take care of that.
And hardware isn't free to run. You have to pay ongoing costs for rack space/power/cooling and network bandwidth in whatever datacenter you choose for it to live in. That's not free either.

There is a concept called TCO, Total Cost of Ownership, which is the only valid comparison. Comparing the hardware costs of a server to the monthly costs of renting on demand servers is an invalid comparison.

hass’s picture

Sure, I do not have all the facts yet, but I know that you have written some lies here.

  • Every time there's a new Shellshock, or Heartbleed, or whatever branded security crisis comes around, that means you have that many more servers to fix/administer.

    Amazon does not maintain your OS images. You need to maintain them on your own. It is the same as doing it on your own servers.

  • AWS does not save you a single administrator job. You still have the same work to do. Systems do not install themselves.
  • Traffic costs are the same.
  • Hardware may fail, but replacing a hard drive takes 2 minutes and it may happen 1-2 times with 50 drives in 5 years. We may need to add 2-3 working hours here, but that's it. No big costs.
  • The admin is still with you, as otherwise nobody maintains your servers and you get a Shellshock, also on AWS. Nobody protects you from this except you, yourself. Everything else is a lie.

Let's add more facts about the TCO and prices:

  • A full server rack costs 700-1000 EUR per month plus energy in Frankfurt/Germany. In sum, with 3.5 kW of power and a failsafe gigabit internet connection, you may pay 1800 EUR/month. We run full racks under load and we cannot shut anything down. If you need half a rack or just a server, that is also possible for less money. But I'm sure d.o can fill a full rack with the hardware it needs.
  • The more VMs you need, the worse it is with Amazon. I'm sure d.o requires enough servers to fill half a rack.
  • To save the rack money you can rent hosted servers. There are very FAT machines available for 99 EUR per month or less.
  • Amazon has no firewalls.
  • You need to rent a load balancer at Amazon. Extra costs.
  • You need to rent disk space. Extra costs.
  • You can easily run 50-100 virtual machines on one piece of hardware. That is what Amazon is selling you! You said you need 30 VMs; that would fit here well. RAM is cheap. Also, if you buy a second server for failover you are still fine on costs, as you can run other stuff on that server, too.
  • It does not matter if the hardware idles. It is still cheaper to buy.
  • You could shut down machines and cores to save some energy if this is really a concern. I know we are running 16 blade servers with dual processors and 24 cores per machine at 2.3 kW, and I have disabled power saving everywhere.
  • I cannot shut down the central database servers... I guess d.o cannot either. That means no savings.
  • You have no idea how much power you get from Amazon, as all the hardware is highly overbooked. You never get the full CPU power...

I already made a TCO comparison on our side and it was 8.5-10 times more at Amazon, and that is only if we do not add more virtual machines, which we will. More virtual machines on our own boxes will not cost 1 cent more. On Amazon you pay extra for everything. I tried to calculate it down, but this was not possible. When I started a 1:1 VM comparison I was at factor +12 with AWS. Network, firewall, IPs, traffic, load balancers, cables, etc., everything included. High quality hardware, no cheap sh*. Not only servers.

I'm interested to hear what servers d.o runs (all of them) and how much CPU and disk space this requires. As noted, after 3 months of uptime the picture gets worse at AWS, always. I'm sure I can prove this in every constellation. I'm also sure the test machines run more than 3 months a year. You can run a lot of VMs on your own hardware, and the benefit gets better and better the more services and VMs you are running.

If you are a small customer and you only need a normal webserver, you would have to be brain-dead to book a virtual AWS server with 2 cores at 80 EUR/month, as you can get the same for 8.85 EUR per month at any other hoster, including free backups.

Amazon's marketing is *outstanding*. No idea how people can get sooo brainwashed. But if you are familiar with IT, then you know that you only get ripped off: questionable service, overbooked machines, no guarantees, data loss.

Mixologic’s picture

The main thing you do not understand is that the testbots are *not* long-running servers. They are throwaway systems. Every test spins up a new instance, on demand, and destroys it when it is done. It is a completely different way to manage that workload, so when a Shellshock happens, it's not even really that big of a threat, because even if a bot gets compromised, it's going to get torn down within an hour. That's the whole point: we don't actually have to maintain these servers, *because* they are ephemeral. We may, at some point, make changes to the base image that we use to provision new servers, but that only happens once in a while.

We do not need virtual machines, we need CPU cores, and we're maxing out the hardware we're on. So it's completely irrelevant how much RAM we can have, or how many VMs we can fit in a machine, because one test suite can completely peg the CPU for 35 minutes on a 32-core machine, to run a single test.

Drupal.org *does not* run on AWS. It has a much different workload characteristic, and yes, it is much cheaper to run that type of workload on our own hardware, which we are doing.

A testing application is wildly different than running a webserver, has different needs, and cannot be compared to webhosting costs. And DrupalCI has *specific* needs that make it a poor fit for using dedicated hardware, and end up costing more while providing less service.

hass’s picture

The main thing you do not understand is that the testbots are *not* long-running servers. They are throwaway systems. Every test spins up a new instance, on demand, and destroys it when it is done.

Sure, I understand this, but there are always tests running, 24 hours a day. Do you have statistics on when it drops to 0? I'm sure it does drop to 0, but that is acceptable and does not require destroying the machine. You only destroy the machine because you have to pay for a running machine at AWS. If it's running on your own hardware/VMs you can keep them. You can also destroy them, but it may no longer be needed. Destroying them is just cost optimization at AWS.

It is a completely different way to manage that workload, so when a Shellshock happens, it's not even really that big of a threat, because even if a bot gets compromised, it's going to get torn down within an hour. That's the whole point: we don't actually have to maintain these servers, *because* they are ephemeral. We may, at some point, make changes to the base image that we use to provision new servers, but that only happens once in a while.

I do not get your argument. Before, you said you need to maintain these servers because of possible new attacks, and now you say this does not matter because you destroy them after runs. You disproved your own argument. Always keep in mind: what you do at AWS can be done on your own boxes, too.

We do not need virtual machines, we need CPU cores, and we're maxing out the hardware we're on. So it's completely irrelevant how much RAM we can have, or how many VMs we can fit in a machine, because one test suite can completely peg the CPU for 35 minutes on a 32-core machine, to run a single test.

LOL. You know that you only get virtual machines from AWS? Not only that, you do not get a 32-core machine from them. OK, it may look like 32 cores, but what you get are just vCPUs (*virtual cores*). This is not the same. AWS also does not tell us how many GHz this is. Tons of these 32-core machines run on the same hardware, faking 32 cores of maybe only 10 MHz each. Nobody except Amazon knows, and nobody guarantees that you have 32 x 2.8 GHz all of the time.

Drupal.org *does not* run on AWS. It has a much different workload characteristic, and yes, it is much cheaper to run that type of workload on our own hardware, which we are doing.

Great. You could just buy a few more servers, run the test infrastructure on them and save A LOT OF MONEY. At least you could cover the base load. If a DrupalCon comes up you can add some AWS machines for the 3 hot programming days (<3 months) to cover the unpredictable extra load. However, I'm sure we as developers are also able to wait 30 minutes longer if a system is under heavy load at a DrupalCon, if that saves you 8-10 times the money. You only need to tell people that, for cost reasons, tests may need to run at off-peak or these "zero" times. You could also schedule them in the queues based on project popularity, developer activity, issue queue activity, commit popularity and maybe other factors, just to finish more important issues earlier.

A testing application is wildly different than running a webserver, has different needs, and cannot be compared to webhosting costs. And DrupalCI has *specific* needs that make it a poor fit for using dedicated hardware, and end up costing more while providing less service.

I'm sure the load is very high at a DrupalCon, but not unpredictable, as there have been a lot of them and you should have statistics on how much load they bring. Over the rest of the year you should know how many tests run in a day and when load is high or not. Are these statistics public?

JvE’s picture

If what you say is true Hass, then you should provide DrupalCI with the infrastructure to run the tests on for *half* the price of AWS and make A LOT OF MONEY.

hass’s picture

In contrast to others, I'm not trying to sell Drupal.org sh** for gold and/or trying to advertise my own products. What I said is true and I can prove it easily.

You just need to make a correct economic comparison with a technical background. You only need someone who has skill with VMware/Hyper-V/Xen and someone who understands that Amazon's CPU power is not 1:1 comparable to normal CPUs from VMware/Hyper-V/Xen. An Amazon AWS vCPU is only a fraction of a real CPU or a VMware/Hyper-V/Xen vCPU. And also do not forget those cheap hosters out there... it does not need to be named "cloud" to be cool, fast and cheap.

andmair’s picture

Totally agree with your comment @mherchel. The only reason Drupal has been so successful is that it has a strong community. It is easy to have your voice heard and engage in constructive debate in the Drupal community if you deliver your message in the right way. Bullying tactics like this only serve to break down trust and stifle legitimate debate. So remember to bring your manners with you in future please :)

imadalin’s picture

Maybe the people involved in the project could tell us what kind of instances are used.

A while ago I worked on a CI setup using Spot instances, and in the long term it proved to be the most cost-efficient, 5-6 times cheaper than on-demand, the alternative used by most companies that need temporary instances.

Also, AWS infrastructure has some additional certifications which are critical to some organizations. It is a step towards helping Drupal studios when working on government projects, for example (maybe it is not a requirement in all countries, but there are cases where even open source software must meet certain requirements).

Also, about maintaining hardware: it's not good for the community, as it could raise more pointless discussion about how things work. Having a third party like a cloud hoster is preferable, and whoever can offer a true alternative should present it so people can vote on whether it suits the community better.

hass’s picture

My point was to not use AWS/Azure at all. Nor any other on-demand instances...

Whatever instance you use at AWS, it is always a lot more expensive than hardware in your own rack. And if servers are too expensive, you can also use i7 desktops for high CPU load and an SSD with a 5-year guarantee. These desktops are at ~800/900 EUR with 5 years of NBD service, without the cabinet and power costs, which are negligible if there is an existing rack available and you do not need to make it 100% fail-safe. Not that professional and no remote control cards, but *cheap*. Even if they are running 24 hours a day they are always cheaper than any spot instances, and they can handle a lot more traffic.

If Amazon does not provide the instances for free, it is no deal. Community-driven projects should save their money by turning every cent over 10 times before spending it on a company like AWS that makes over 80% profit from them! Making 80% profit from an open source project does not sound like a fair deal to me.

Mixologic’s picture

We use cc2.8xl instances. cc2.8xl instances come with 2 Intel Xeon E5-2670 processors @2.6GHz, which works out to 32 cores per machine. We use spot pricing for the servers and usually end up paying in the range of $0.28-$0.33 an hour. They have 60 GB of RAM, which we use to put the entire testing infra in memory.

With instances that size, we are essentially guaranteed that there will be no multi-tenancy, and we're not actually sharing them with anybody else, and our performance data bears that out (they are absolutely consistent over time, never variable).

gor’s picture

I would recommend using a different type of instance for contrib tests.

For example:
https://dispatcher.drupalci.org/job/default/152039/console

It takes ~4 minutes to process tests

If you use c1.xlarge (8 CPUs) instead of cc2.8xlarge, the hourly price drops to a third, while test time will stay around 15 minutes.

That is an acceptable time for contrib, due to the smaller number of tests compared to core.

Mixologic’s picture

The main issue with using different instance sizes is that those test executors can and do sometimes keep executing other tests. After that 4 minutes is up, that testbot is then available for the next 56 minutes to start another test, so that particular testbot may run for 4 minutes, sit idle for another 10, then run a core test for 38 minutes, and sit idle for another 8 before shutting off.

If they are different sizes, then we can't easily share them between all testing. We'd have to keep the contrib tests on contrib instance sizes, and core tests on core instance size, meaning we'd end up spinning up *more* machines than we otherwise would, even though some of them would be smaller.

gor’s picture

I got your point.

If there are not enough contrib tests to keep one test machine busy, then it does make sense.

For people who are not familiar with AWS billing for EC2:
The AWS minimum charge is for one hour. Even if you run your instance for 5 minutes, it charges you for 1 hour.
If you shut down and start the instance again, it will charge you for another hour.

That's why DrupalCI runs instances for 1 hour (or a little bit less, so they can be shut down before the hour expires) and then shuts them down.

generalconsensus’s picture

This mission to tackle the testing issue is a boon to D8 and to D.O. testing. Huzzah!

gor’s picture

I recently optimized tests for Backdrop (a kind of Drupal 7 fork, if you don't know it).

So tests run in under 5 minutes (3.5 min + start-up time) on a PHP 7 platform (8-core machine); it costs around $0.05 per test run on https://zen.ci.

You can see the PR with the code optimization here:
https://github.com/backdrop/backdrop/pull/1366

This optimization allowed tests to be sped up at least twofold. I did not look at the test structure for Drupal 8.
Ask somebody who is more in touch with D8 tests to check it out.

But making it twice as fast means twice as efficient from a money perspective.

Mile23’s picture

Re-using the fixture from setUp() means you lose the isolation between tests. This means the results of one test might feed into the results of the next one, leading to an unknown state.

In D8, we're moving towards more unit testing and converting the simpletest tests to our PHPUnit-runnable alternative, which allows you to specify the boot level you need - WebTestBase vs KernelTestBase vs UnitTestBase. This is a much better approach than assuming tests don't need a new fixture each time.

gor’s picture

@Mile23 Please reread the code. It is not re-using. It's caching the database structure. So instead of installing core before each test, there is a less expensive process: copying the database structure. So each test has a "fresh", or let's say "isolated", copy of the database structure.

Mile23’s picture

Right, so if the patch changes the installation procedure and you run a test that bypasses installing stuff in favor of a freeze-dried database, what are you testing?

There's currently an issue for this on d.o, which is probably a better place to have the conversation. :-)

https://www.drupal.org/node/1411074

gor’s picture

The cache gets generated at test start, so it is based on the patch that is part of the test request. Yes, there is an assumption that the tests do not change the installation process.

jp.stacey’s picture

This is a really clear, readable explanation that makes a complex architecture quite straightforward to understand.

I'll be showing it to any clients that are interested in not just Drupal's robustness but also in how CI might benefit them too, because it really makes clear how powerful yet explicable it can be.

Thanks for writing it, but more importantly thanks to the CI team for all the work they're doing and have done down the years. It's made a huge difference.

--
J-P Stacey, software gardener, Magnetic Phield

colan’s picture

Just in case it could be useful in the future, GitLab CI is another alternative.

It ties in very nicely with the Use GitLab for all projects idea.

gor’s picture

@colan GitLab is an alternative to Jenkins.
You still need to maintain the testing infrastructure.

The main part of the expense is instance time to run tests.

A Docker-based infrastructure makes it possible to deliver the required test environment (PostgreSQL, SQLite, MySQL, PHP 5/7, etc.) fast enough and to launch new testbots when there is a big queue.

$0.15 per test is not so bad.

I am sure that some tests run much faster, while others take a lot of time.

Liam McDermott’s picture

Came here to suggest this too! GitLab is unfortunately overlooked, despite being as capable as GitHub but also Free (as in Freedom).

J.R.’s picture

I am relatively new to Drupal (I migrated to 8 on May 19th, 2016, from Ning2).

I had some problems and they were solved by this community (deep bowing!!!).

I wanted to start helping (which in my case means publishing my migration (under way) and... learning): this post is perfect for me! I can learn how things at Drupal work, and find links to more of what I need to learn next.

overwatch’s picture

This is good information, thanks for sharing this news.