Our tests on Travis-CI have again gotten close to the 50 minute limit, and frequently exceed it. This means we basically have to keep re-running errored jobs for tests that actually pass, just to see them succeed!

Here are some things that are worth experimenting with:

  • Making sure that APC is enabled and has enough memory. I've never even checked this, but this article suggests that you need to take action to enable it: http://blog.travis-ci.com/2013-03-08-preinstalled-php-extensions/. This has its own issue: #2441885: Enable APC on Travis-CI
  • Enable CSS and JS aggregation. They aren't enabled by default! (See the first sketch after this list.)
  • Try using Apache rather than the PHP webserver. This could either slow down or speed up the tests; it's hard to say. This has its own issue: #2183021: Should Travis-CI use Apache/nginx rather than the builtin PHP webserver?
  • Profile our Behat code and see if improvements can be made. This would most likely be in a @beforeStep, @afterStep, or @beforeScenario hook (see the timing sketch after this list). This has its own issue now: #2447839: Audit Behat code for slowness and optimize
  • Reduce the setup time on Travis-CI. We currently spend about 10 minutes(!) just setting up to run the tests, the majority of which goes to running drush make and composer. It'd be really sweet if we could seed the drush cache (i.e. populate ~/.drush/cache/download) with a tarball that we build with a script and upload somewhere (see the cache-seeding sketch after this list). This should be safe, because if we fail to include something (or include the wrong version), drush will just download it normally. I don't know composer well enough to know if we can do something similar there, but I'd suspect so, especially since the versions of the stuff we pull with composer don't change very often.
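
For the CSS/JS aggregation idea, a minimal sketch of what the setup could run (this assumes Drupal 7's variable names; it could be executed with drush php-script, or equivalently with drush vset):

    <?php
    // Turn on CSS and JS aggregation before the test run.
    variable_set('preprocess_css', 1);
    variable_set('preprocess_js', 1);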
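
For the Behat profiling idea, a rough sketch of a timing hook (assuming Behat 3 hook scopes; the class name and the 5-second threshold are made up for illustration):

    <?php

    use Behat\Behat\Context\Context;
    use Behat\Behat\Hook\Scope\BeforeStepScope;
    use Behat\Behat\Hook\Scope\AfterStepScope;

    /**
     * Logs any step that takes longer than a threshold, so slow steps
     * show up directly in the Travis-CI output.
     */
    class StepTimingContext implements Context {

      private $stepStart;

      /**
       * @BeforeStep
       */
      public function startTimer(BeforeStepScope $scope) {
        $this->stepStart = microtime(TRUE);
      }

      /**
       * @AfterStep
       */
      public function reportSlowStep(AfterStepScope $scope) {
        $elapsed = microtime(TRUE) - $this->stepStart;
        if ($elapsed > 5) {
          printf("SLOW STEP (%.1fs): %s\n", $elapsed, $scope->getStep()->getText());
        }
      }

    }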
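
And for seeding the drush cache, a hypothetical sketch of the kind of script that could run before drush make (the tarball URL is a placeholder, not a real artifact; anything missing from the cache just gets downloaded by drush as usual):

    <?php
    // Pre-populate the drush download cache from a pre-built tarball.
    $cache_dir = getenv('HOME') . '/.drush/cache/download';
    if (!is_dir($cache_dir)) {
      mkdir($cache_dir, 0777, TRUE);
    }
    // Placeholder URL -- we'd need to build and host this tarball ourselves.
    copy('https://example.com/panopoly-drush-cache.tar.gz', '/tmp/drush-cache.tar.gz');
    $archive = new PharData('/tmp/drush-cache.tar.gz');
    $archive->decompress();  // Produces /tmp/drush-cache.tar.
    $tar = new PharData('/tmp/drush-cache.tar');
    $tar->extractTo($cache_dir, NULL, TRUE);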

Things that would help but that I'd like to avoid for now:

  • Switching away from Selenium to ZombieJS or similar. While this would help, I really like that our tests run in a real browser. I wouldn't want to miss an issue because it needs a real browser, or spend time debugging issues in ZombieJS when it works fine in a real browser.
  • Breaking our tests into multiple jobs. This would definitely get us under the 50 minute limit, but we already spawn enough jobs and this would make things even more confusing. I'd also prefer that the tests just be faster, since then we can run more of them locally or on Travis-CI.

Comments

dsnopek’s picture

Issue summary: View changes

It just occurred to me that I don't know whether CSS/JS aggregation is enabled by default or not...

cboyden’s picture

Would it be possible to accomplish some of the non-JavaScript tests at a lower level - maybe with PHPUnit?

dsnopek’s picture

Issue summary: View changes

It turns out that CSS and JS aggregation are not enabled by default! Running a test build on Travis-CI with them enabled to see if there is a performance improvement:

https://travis-ci.org/dsnopek/panopoly/builds/52272258

dsnopek’s picture

"Would it be possible to accomplish some of the non-JavaScript tests at a lower level - maybe with PHPUnit?"

I don't think so. PHPUnit is best suited to unit testing, but we aren't testing "units of code" (i.e. a single PHP function at a time); we're testing whole pieces of functionality (functional testing). We could do functional testing with PHPUnit using the Mink API directly, but I really don't think that would gain us much over Behat.
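
For context, this is roughly what PHPUnit driving Mink directly would look like (a sketch only, using the Goutte driver and a placeholder URL; it assumes behat/mink and the Goutte driver are installed):

    <?php

    use Behat\Mink\Driver\GoutteDriver;
    use Behat\Mink\Session;

    class HomePageTest extends \PHPUnit_Framework_TestCase {

      public function testHomePageLoads() {
        // Headless HTTP session -- no Selenium, no real browser.
        $session = new Session(new GoutteDriver());
        $session->start();
        $session->visit('http://127.0.0.1:8888/');
        $this->assertSame(200, $session->getStatusCode());
        $this->assertContains('Panopoly', $session->getPage()->getContent());
      }

    }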

We could try to get more of our tests running without the @javascript tag? That would definitely allow those tests to run faster!

However, so many of our tests cover the configuration of widgets, which pretty much requires JavaScript just to get to that point...

dsnopek’s picture

Issue summary: View changes

Added some ideas about reducing setup time on Travis-CI.

dsnopek’s picture

Title: Improve test performance on Travis-CI » [META] Improve test performance on Travis-CI

cboyden’s picture

One thing that might help is writing some custom widget steps. If what we're testing is not the creation of the widget itself, but something else like its location or searchability, then this would avoid having Behat click through the IPE to create it. I'm thinking of search.feature here, which spins up a browser to test that panelized content is indexed and searchable. This approach could also be used to test things like changing layouts.
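
To illustrate the idea, here is a very rough sketch of what such a step could look like. It assumes the Drupal Behat Extension's RawDrupalContext with the API driver (so Drupal functions are callable from step definitions) and a Behat 3 style step pattern; the content type name is illustrative, and attaching the pane is deliberately left as a comment since it depends on the Panels API:

    <?php

    use Drupal\DrupalExtension\Context\RawDrupalContext;

    class WidgetSetupContext extends RawDrupalContext {

      /**
       * @Given a landing page containing a :type widget
       */
      public function landingPageWithWidget($type) {
        // Create the panelized node through the API instead of the IPE.
        $node = (object) array(
          'type' => 'panopoly_landing_page',
          'title' => 'Widget test page',
          'language' => LANGUAGE_NONE,
          'uid' => 1,
        );
        node_object_prepare($node);
        node_save($node);

        // Attaching the $type pane programmatically would go here; the
        // exact calls depend on the Panels API and are omitted.

        $this->visitPath('node/' . $node->nid);
      }

    }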

dsnopek’s picture

@cboyden: That's a great idea! Also, if we can get the widget onto a Panel without needing to do it in Selenium, we might be able to access the edit form via the "nojs" link and do some of our widget editing tests in Goutte, which would run much, much faster.

dsnopek’s picture

Category: Task » Plan