No test coverage for the frontend.

Proposed resolution

Use Behat.

Discussed this with Jesse Beach, nod_ and Alex Pott yesterday at Dev Days (Szeged) and we strongly feel we should use Behat to get functional JavaScript test coverage in core. It gives us access to real browsers and phantomjs via mink, the tests are nicely self documenting with gherkin, lots of core developers are already familiar with it via client projects, it's used on etc. etc.

This wouldn't be a replacement for SimpleTest, but likely all the SimpleTests using the browser could be moved over.

There will be things we can't do with Behat, but we can tackle those separately.

Quote from catch's comment at #237566-159: Automated JavaScript unit testing framework.

I'd like to keep this issue clean and at a reasonable amount of comments. Let's file follow-ups for specific topics that needs to be agreed on, that worked very well for the JS meta clean-up.

Realistically we can't get started until we get at least a couple examples of implementation. I'll be talking with a few people this week already, I know jessebeach is as well. If you have a project using behat, please reach out to your closest JS maintainer to talk about it.

Remaining tasks

  1. Get feedback from projects using behat in the wild and sum that up somewhere.
  2. Reach out to Behat developers to see if/when they can help us out.
  3. Agree on how to implement this for core and contrib (where are test files, standards for writing tests, etc.).
  4. Make d.o testbot run behat tests

User interface changes


API changes

Probably none, only additions.

People and groups involved Infrastructure Working Group Developer Tools Team Community Tools Leadership Team

#77 interdiff.txt14.42 KBm1r1k
#77 behat-2232271-74.patch34.53 KBm1r1k
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 90,532 pass(es), 7 fail(s), and 0 exception(s).
[ View ]
#64 behat-2232271-63.patch33.39 KBscor
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 74,821 pass(es), 16 fail(s), and 0 exception(s).
[ View ]
#51 behat-2232271-50.patch35.5 KBgrasmash
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 72,672 pass(es), 16 fail(s), and 0 exception(s).
[ View ]
#46 behat-2232271-46.patch45.58 KBgrasmash
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 72,188 pass(es), 61 fail(s), and 0 exception(s).
[ View ]
#7 1._vagrantlucid32_varwwwd8_ssh_20140403_160828_20140403_160841.png82.82 KBdstol


fgm’s picture

I do have projects using Behat, but at the "whole site" level, not individual modules.

We've been extending the DrupalContext provided by DrupalExtension, as well as extending the Behat Buzz-based WebApiContext

FWIW, (a) Behat 3 is almost launched and changes things a bit, (b) Buzz, which supports the WebApiContext it not very much alive and mixing Mink-derived steps with WebApi-derived steps doesn work (c) The WebApi context itself uses old-style includes, and already needs some tweaking to work with current practice.

All in all, with Rest getting such a larger role in current practice, we should use steps similar from the WebApi ones, but based on the Mink.

Not sure where this can take us regarding the larger issue you describe.

catch’s picture

We use Behat for several client projects at Tag1 as well, and also started a proof of concept for automated performance testing using it a while back.

Per it looks like 3.0 supports running multiple suites at the same time, that would let us do $module/features with their own scenarios and context classes, as well as whatever site/profile level tests we'd want to ship with. Even without that we could collect tests outside behat and run them individually.

Two issues to open, but just putting notes here for now:

We should probably at least open an issue to look at porting DrupalExtension to 8.x/core. There's some things in there we probably can't rely on in core, like drush, but that's for the issue to sort out.

Also how to deal with the set up of the tested site, i.e. we'll need to have a clean install, but do we do that per scenario? per feature? Could we make it optional via a tag? Behat already has the events to handle this so it's mainly a choice on our side.

nod_’s picture

jessebeach’s picture

jessebeach’s picture

Issue summary:View changes
dstol’s picture

nod_ pointed me to this issue.

I'll note that this might lend an opportunity to remove the vendor directory from core. I would assume we would add drupalextenion, behat, mink, etc to require-dev. Granted, there is some overlap with core, but just adding composer.json below to an empty folder creates 12 mb of libraries, which might be fine, worth some discussion.

  "require-dev": {
    "drupal/drupal-extension": "*",
    "behat/mink-extension": "*",
    "behat/mink-selenium2-driver": "*"
dstol’s picture

Updated #2229747: [d8] settings() has been removed and VERSION constant removed and was able to run a really simple behat test against 8 with no issues.

jhedstrom’s picture

We should probably at least open an issue to look at porting DrupalExtension to 8.x/core. There's some things in there we probably can't rely on in core, like drush, but that's for the issue to sort out.

Drush isn't mandatory, just one of the available drivers for communicating with Drupal. The Drupal driver would be the way to go. Drush is used for the testing of the extension itself (see .travis.yml, and the bundled Behat features).

I'd be interested in discussing an approach where the entire extension isn't forked and placed into core, but subcontexts for individual modules are created and put into core.

Also how to deal with the set up of the tested site, i.e. we'll need to have a clean install, but do we do that per scenario? per feature? Could we make it optional via a tag? Behat already has the events to handle this so it's mainly a choice on our side.

I'd really like to not see a clean site per scenario. Even per-feature seems overkill for this level of testing. That isn't to say that tests should rely on data created from previous scenarios, but if a reasonable effort is made for scenarios to clean up after themselves (the Drupal Extension currently clears out any users, nodes, and terms created during a scenario), then we can avoid re-installing for every feature, saving a tremendous amount of time spend on test runs.

larowlan’s picture

Had a lengthy discussion with nod_ about this - here are the main points

  • At PreviousNext we chose the zombie driver over selenium/phantom because of speed concerns
  • The Zombie driver works by creating a Nodejs server (a .js file) in a temporary directory and creating a connection to it. The server contains a global browser variable which is an instantiated zombie browser object and has access to the stream via an argument.
  • All of the Behat\Mink\Driver\DriverInterface methods are translated into JavaScript commands that are sent to the server. These interact with the global browser variable. Each command is stored in a buffer and then when stream.end method is invoked, are executed (using eval in the node JavaScript).
  • The window object is available to your step-definitions and as such you can execute JavaScript and use jQuery etc to trigger events.
  • The Zombie implementation has no concept of a viewport so resizing, scrolling etc aren't available, however depending on how the JavaScript under test is architected, you can use jQuery to trigger similar events if needed.
  • The main drawback experienced with Zombie is internal limits. It seems to be limited to the size of the request it can parse and/or the time in which the server takes to respond. For slow performing pages/sites we've had to ensure the cache is enabled, CSS and JS aggregation are turned on and the cache is primed for the given path for Zombie to work. When it doesn't work, it fails silently with an empty dom.
  • In terms of JavaScript testing I think there is an intermediate step we can take, we don't need Behat to use the Selenium/Zombie/Sahi et al drivers. To this effect I've added #2232861: Create BrowserTestBase for web-testing on top of Mink. This opens the door to Step 2: Add JavaScriptTestBase which uses a JavaScript capable Mink driver. My concern is that the Behat road is a long one and we could find ourselves twelve months down the road with no Behat, but also still no JavaScript testing. The Mink path is actionable now and also modernises the guts of WebTestBase in the process.
  • In terms of cons of Behat, they've been touched on already by @jhedstrom - isolation, isolation, isolation. I second the 'clean up after yourself' notion but just don't see us replacing simpletest in entirety without some form of isolation and so I still think that leaves a place for TestBase's notion of a child site. Although we're making great progress in #2016629: Refactor bootstrap to better utilize the kernel which should hopefully open the door to HttpKernelTestBase.
  • The other issue we've experienced with Behat in general is timeouts, but thats tuneable.
  • In terms of adding our dependencies I agree - we need to pivot away from having /vendor in our repo. If you get the tar.gz, the packaging script can add those dependencies. If you're fetching with git, you can run composer. This also applies to the node modules required by Zombie etc. We add a packages.json/lock to our repo and thats it. Those who want to use the JavaScript testing framework need to worry about installing node and npm, these don't become a hard-requirement for Drupal. Our testing infrastructure then includes a composer install and a npm install. We make the node_modules configurable via settings, but default it to DRUPAL_ROOT/node_modules. If you're using global installs of node modules, (like testbot might), you can configure the path in your settings.
  • In terms of running and collecting results, I see the docker work done by @jthorson, @nick_schuch, @ricardoamaro and @beejeebus et al in #2161175: Plan/create a Docker container for pifr deployment make management and maintenance easier has a next step of jenkins integration so I would advocate that we work towards a formal build process in core. This would entail a build.core.xml for consumption by Phing. Testbot installs Drupal in a subdirectory (drupal) so at the top level (outside /drupal) it could include a build.xml that imported build.core.xml definitions. These definitions would include build commands for testing simpletest tests, behat test, phpunit, automated visual regression testing - and anything else needed. All of these can produce output in junit format for consumption by Jenkins. With the Jenkins console, we could watch test runs as they execute - neat!
catch’s picture

Drush isn't mandatory, just one of the available drivers for communicating with Drupal. The Drupal driver would be the way to go. Drush is used for the testing of the extension itself (see .travis.yml, and the bundled Behat features).

Yes, this means we'd add the Drupal driver to core, but the drush driver would stay in contrib.

I think we can handle isolation via a @tag on the scenarios, at least to start with. We know it's possible to do both isolated, and not-isolated tests, so we can figure that out later IMO.

I've personally used selenium+phantom but don't have a preference for that vs. zombie. The advantage of behat is apart from the test running environments and limitations in the browsers themselves, it's very little effort to switch.

Thanks for opening #2232861: Create BrowserTestBase for web-testing on top of Mink, following.

dstol’s picture

What's the need to refactor the Drupal driver out of DrupalExtension and plop it into core? I don't understand the benefits vs vendoring.

catch’s picture

Core APIs change very, very frequently. We also have a requirement that 100% of core tests pass all the time (i.e. a failing test prevents commits).

That means that any change to core that would break the Drupal driver, would either need a simultaneous commit to the driver, or some sort of bc layer added. This applies to both the driver and especially DrupalExtension.

While 8.x has added a lot of external dependencies to core, we currently don't have any core dependencies where core counts as 'upstream' pulled in via composer.

grasmash’s picture

I'd like to attempt to create a brief overview of the topics that needs to be discussed and the decisions that need to be made before Behat can become fully integration with Drupal, both on a core and on a contrib level.

  1. Directory structure

    We need to define a standard for the placement of various required Behat files. This includes:

    • The BehatExtension. Questions: Should this be housed in core? If so where? Should it be downloaded separately with vendor files?
    • Subcontext files for each module. Questions: Where in the module directories should these be stored? What should the file naming convention be? Should we use PSR-0/4?
    • A yml config file. This may be required per-module and per-site (in a multisite install). Questions: should we define the Behat settings in existing module yml file?
    • .feature files for each module.
  2. Implementation of subcontexts

    We need to define standards around how each module creates a subcontext, which should contain custom step definitions. Suggestion: Every subcontext should be an extension of the main DrupalContext.

  3. Behat + Drupal bridge

    We need a tool that will aggregate all of the various Behat configuration settings and files provided by individual modules and provide them to Behat. A few requirements that come to mind:

    • Behat needs to know the locations of all .feature files provided by various, enabled modules
    • Behat needs to know the locations of all subcontexts provided by various, enabled modules
    • Behat needs to know the yml config settings provided by various, enabled modules
    • Behat should be aware of global and site level (e.g., in multisite setup) yml config settings
    • A drush command or a UI should be available for executing and perhaps viewing the status of these test
    • We should consider how to integrate this with various build tools, like Jenkins and Travis CI
  4. Session Driver

    We need to determine which session driver will be used by default for Javascript Behat tests. Some available options are: Selenium2 and Zombie.

  5. Determine details around implementing TestBot for these tests
grasmash’s picture

I've started a sandbox fork of D8 that introduces Behat testing into core. It is not yet functional, but if you're interested in commenting or contributing, it's here:

fgm’s picture

Item 2 is no longer an issue: subcontexts are going away in Behat3, which is just a few days away from release at this point. The new suite-oriented Behat configuration may simplify our work in this regard.

Also, note that the WebApiContext I mentioned earlier is now a separate component, being completely redone for Behat 3.

grasmash’s picture

I was recently patching the Drupal Behat Extension to work with D8's new session management functions, and got into a conversation with sun regarding the Behat Extension's Drupal driver, which bootstraps Drupal and manually opens a session via the Drupal API. He brought up a good point:

The moment you start a session in a test runner, you open the door to insanity (a.k.a. Simpletest's WebTestBase), because you're duplicating state between the test runner and the site under test... Simpletest has actually matured a lot in D8, but that duplication of environments still exists, and is one of the last things that causes plenty of problems... I'd highly recommend to not enter that path.

As it applies to the Behat Extension, this comment suggests that we should not bootstrap Drupal at all. That would mean not using the Drupal driver. All interactions with Drupal should occur via a RESTful interface. To quote sun again:

a BDD test should exclusively log in via the user login form only. Likewise, creating a user role, is equally POST /admin/config/people/roles/add, etc.pp. This kind of testing should have a _strict_ separation between the test runner and site under test + no shared state at all.

The primary argument against this approach, as I see it, stems from performance concerns. It's faster to create entities with a simple entity_create() call than it is to make a GET request for a form, fill it out, and POST data back. However, this could be somewhat mitigated by sharing fixtures (set up steps) between scenarios in a given Behat feature.

performance was the exact (misguided) idea that made us call API functions in the test runner (instead of performing requests) + inherently introduce a shared state between the test runner and the site under test due to that, which turned into a utter freakin' PITA over time.

This seems like an important topic to discuss before going to far with integrating Behat into core. Any thoughts?

moshe weitzman’s picture

I think it is OK to call into Drupal's API to do setup work like create entities. I dont think we have any choice. The REST API didn't get around to supporting all field types in core, and contrib will be worse. For example, we can't yet populate image/file fields using REST API.

I agree with Sun that all checking of functionality should happen via login form, link clicking, etc. I also abhor the shared state problem in Simpletest that Sun refers to. I just think we are falling into the same trap here. I guess I should look at the Drupal driver a bit closer.

fgm’s picture

Relying only on the UI is a problem because once outside standalone core development, in actual site projects, themes can build screens differently, and make the core tests fail. The Behat extension currently addresses this by allowing the redefinition of a small number of strings or selectors to customize expectations, but to use Behat on a larger would require this sort of customization to be much more extensive.

Also, BDD is not just for UI, but also applies to web APIs, where the "customer" expectations are different, and tests are not expected to rely on a HTML UI, but (typically) only on JSON or XML exchanges. The interesting thing is that these are independent from theme, and can probably be independent of language too (another problem with UI-based tests), and can provide a more solid, and possibly faster, basis to build test contexts (loading fixtures), than working through the UI steps. That's what we did in a recent project, BTW.

In a typical Given/When/Then sequence, the three predicates must be expressed at business-level, but there is no reason why they would have to be implemented as such, especially (IMHO) for the Given, which is a starting point from which actual checks are done, and should probably be done as efficiently as possible, while the other two are more representative of the actual behaviour to be tested. Of course, the fact that the steps implementing G/W/T are exactly the same may make things a bit difficult.

jessebeach’s picture

Awesome, thanks for starting the sandbox grashmash!

We want to work in such a way that will allow us to easily propose changes to Drupal 8 core when this project reaches a state suitable for incorporation. So, let's make sure that any changes to core itself are associated with an issue and that that issue is tagged with Behat. Once we do reach the point where it becomes necessary to patch core, let's start a distribution project so we can track the patches.

grasmash’s picture

@jessebeach I've actually already modified core in the sandbox. A step has been added to the installation process that configures the necessary behat.yml file.

sun’s picture

@fgm: "UI" was not strictly meant to mean "HTML UI" in #16. Focus was on avoiding shared state (which implicitly means that booting a parallel Drupal and calling API functions should be avoided) and thus keeping the test runner strictly separated from site under test.

In fact, in an ideal world, all fixtures would be set up by performing RESTful HTTP requests (via REST module). — However, I was told that the RESTful server capabilities in core do not support config entities yet (which probably present the largest chunk of fixtures). I don't know how easy or hard it would be to resolve that blocker.

jessebeach’s picture

grashmash, do you plan to do the bulk of the integration work in a module? Does that module exist yet?

grasmash’s picture

@jessebeach I do not plan to house the initial integration in a module, although there may be a need for a module later on if we'd like to provide a UI. Currently, the plan is to for Behat code to live in two places: 1) in core .and 2) in the DrupalExtension library (which is not a module).

I've currently implemented the following additions to core:

  • core/behat.yml (required config file)
  • core/tests/features/Installation.feature (an example Behat test)
  • core/tests/features/bootstrap/FeatureContext.php (required Behat class)

In order to properly follow PSR-0/4, we may need to move and rename FeatureContext.php. has also been modified so that it sets up the proper configuration in behat.yml during the install process.

I've also modified composer.json so that it downloads a couple of required Behat libraries, including the DrupalExtension, which will live in core/vendor.

grasmash’s picture

@moshe @sun

We may not actually need a REST API in core in order to avoid the shared state problem. Behat can make GET and POST requests to the UI via the Goutte driver (which leverages Guzzle). This will work, but has two distinct disadvantages: 1) This will significantly slow down testing given that each set-up step will need to request fully-rendered pages from Drupal. 2) This will a LOT more legwork to modify the existing DrupalExtension library, which is already functional using the bootstrap approach.

moshe weitzman’s picture

As I said before, I think it is fine to call into Drupal to do the GIVEN stop like setting up entities. It is just the WHEN/THEN steps that should be indepant of Drupal. I don't see how this violates the 'shared state' problem that Sun mentions. For starters, there is only one Drupal (at least in my head). The test runner is just the `behat` or `phpunit` unix programs. If we are talking about simpletest being the test runner, I'm no longer interested.

grasmash’s picture

@moshe I'm inclined to agree with you, and I've already implemented the DrupalExtension so that it bootstraps Drupal and uses the existing GIVEN steps that utilize the Drupal API. Just wanted to make sure that all of the angles were considered.

jessebeach’s picture

re: #23, the Core modifications will be small and non-clashy, so in the interest of reducing friction, let's leave this as a Drupal sandbox for now and just make sure we merge in 8.x branch commits often. That will allow us to merge the change back into the 8.x branch later without conflict.

We can use the sandbox as the repository for an initial set of tests as well. Given a couple examples, I'll invite a wider audience to come in and propose patches to add coverage to features.

larowlan’s picture

mtdowling pointed out on twitter that goutte which uses guzzle 3, we're using guzzle 4. I'm working on upgrading goutte to guzzle 4 as part of #2232861: Create BrowserTestBase for web-testing on top of Mink

jessebeach’s picture

What is the smallest step we can take now to allow us to start writing tests?

Could we work grasmash's sandbox into core in an inert way that developer's can leverage locally? This would allow us to start populating core modules with feature tests even though they won't be run on testbot. Then, as the Mink work matures, we can swap that in and hook up with testbot (/me waves wand).

The lionshare of the work will be writing feature tests and the sooner we can start with this work, the more coverage we'll have when we finally run this on the integration level.

catch’s picture

With simpletest, we moved the contributed modules and it's tests (some of which had code comments in Polish etc.) directly into core wholesale.

Many of the tests were failing, or were broken by changes to core after they were added. Also there was no testbot support (at least patch-level testing) for some time between the core move and enabling it.

I'd personally be fine with doing similar here, but with one caveat:

Once it's feasible to enable test bot support for behat, we should do so. This means we need to be able to have 100% passing tests when that happens. A small set of passing tests that we can report back to issues when we break them, is a lot better than a large set of failing tests that we can't report on. Also trying to fix issues without automated marking of patches CNW is a losing battle given the rate of core commits, and it's worse the more tests there are. If we have lots of coverage that's OK, as long as we can easily disable failing tests.

On the Drupal driver. I personally would prefer to use REST and fixture sharing if we can. For one thing this would allow for using the core behat framework to run tests on non-local sites for client projects etc.

However given it works, I think using the Drupal driver for setup only is a good start and we can see how it goes down the line - as long as we don't switch between the drupal driver and mink back and forth it shouldn't be an issue with assertions.

jessebeach’s picture

After some discussion with catch, I think it's necessary to define the goals here:

1. Replace simpletest with Behat/(Mink|Goutte)
2. Use Behat + ?? for JavaScript Unit Tests
3. Use Behat + Selenium (or some other JS enabled extension) for JavaScript feature tests

Given that Simpletest works right now, replacing it is less of a concern compared to the zero coverage we have for JavaScript in core.

So, when I mentioned before, what is the smallest step we can take to start writing tests, I meant feature and unit tests for JavaScript. It's my understanding that grasmash's work in the sandbox will allow us to start writing feature tests for JavaScript.

My concern is that this work not conflict with larowlan's efforts to replace simpletest with a Behat/Mink combo.

larowlan’s picture

The approaches are compatible.
Regardless, I'm blocked since we went to guzzle 4. You might be in a similar boat with the goutte behat driver.

Berdir’s picture

Behat with Mink/Goutte is working fairly well with @larowlan's goutte port, my mink goutte driver port (linked above) and a bunch of additional patches and hacks. You can even configure your composer.json file to pick up our ports automatically like this:

    "require": {
        "drupal/drupal-extension": "*@dev",
        "fabpot/goutte": "dev-Guzzle4 as 1.0.1",
        "behat/mink-goutte-driver": "dev-Guzzle4 as 1.0.9",
        "symfony/dom-crawler": "2.4.*@dev"
    "repositories": [
            "type": "git",
            "url": ""
            "type": "git",
            "url": ""
    "config": {
        "bin-dir": "bin/"

(We have that in a module and use the composer_manager module to pick it up)

A few pieces are not yet available, the mink-extension package also needs some changes, haven't figured out what to do with that as it's an old version.

grasmash’s picture

I've updated my sandbox, which is now functional. I've also added a screencast that provides a walkthrough of the changes and instructions for running the behat tests locally.

@Berdir @larowlan I have not encountered any issues with Guzzle 4 as of yet. However, I'm open to utilizing the Mink/Goutte port.

grasmash’s picture

Mosche pointed out that this approach adds a new requirement that the core directory be writable at install, which nobody (myself included) wants. A couple of alternative:

  1. Move core/behat.yml to sites/default.yml
  2. Modify such that drupal_root and base_url are passed to behat at runtime (no need to write to behat.yml)

I've made a branch called no-install-step, which implements approach #2. However, I'm not certain which option is better. A quick comparison (please add your thoughts):

Keeping behat.yml in /core:


  • We don't need to add anything to


  • Users needing to override behat.yml don't have a good example of how to define drupal_root or base_url in a yml file

Moving behat.yml to sites/default


  • Users have a good example of how to customize behat.yml
  • Our bash script does less and makes no assumptions. All values are explictly defined


  • We need to modify install.core.php
pbuyle’s picture

Note: The following is a copy/paste with liugh edit of comment #158 in #237566: Automated JavaScript unit testing framework as requested by @nod_.

Unit testing, integration testing and validation testing testing are three different beasts, looking for a single solution for all three of them is doomed to failure (or at least ugly hack and workarounds the limitations of the choosen tool in order to use it iun a way it wasn't designed for).

QUnit is a unit testing framework for JavaScript, like PHPUnit is for PHP. So if what is needed are integration and validation tests, something more than QUnit is indeed needed. IMHO, for unit testing in JS, neither a browser or an HTML page are required, but you may need to mock some DOM objects (ie. you don't run your tests over an actual site with real pages). For instance, a properly designed Drupal behavior should be testable by calling it over a set of fixture DOM elements (ie. context of the behavior) and asserting the wanted changes of these elements or their children.

Being a BDD framework, Behat is a more a validation testing framework. It tests an complete web application as a blackbox, not its individual components. So it could be used to validate that the JavaScript on a standard Drupal install works as expected. As a bonus, Behat is also usable to test behaviors defined by both PHP and JavaScript at the same time. An example of validation testing would be testing that a specific Views admin scenario create the expected view.

That leave us with integration testing, which is hard to implement over code that is not designed as loosely coupled components. Unit testing has the same kind of issue, but while it is still possible to unit test individual functions and classes by mocking their (hidden) dependencies, integration testing requires a known logic to assemble components together in order to be able to tests the various integration scenarios.

My understanding is that event in D8, Drupal's JavaScript code is not designed as components (even tightly coupled), making integration testing nearly impossible to implement. So focus on both unit testing (which the original issue #237566 was all about) and validation testing (which seems to be to point of this issue) will be more rewarding. Both will encourage more discipline and better quality in the JavaScript code, which should eventually make it possible to implement integration testing.

pbuyle’s picture

Given my comment in #36, the idea of using "Behat + ??" from @jessebeach in #31 for unit testing is, IMHO, a bad one. Even if using Behat for unit testing does not create issues, it will continue to blut the line between the three kind of testing and fail to encourage developer to design code with lously coupled component.

sun’s picture

@mongolito404: Well said. Nice summary on concepts. We might want to consider to incorporate that almost literally into core API docs.

larowlan’s picture

Because we went guzzle4 looks like we might need to use behat 3. Which will mean refactoring.

fgm’s picture

FWIW, here is the temporary location for the Behat 3 docs, until the main site is updated to cover Behat 3 instead of 2.5:

larowlan’s picture

pbuyle’s picture

@larowlan: Jest is a unit testing framework (which look awesome, if only because it mocks the DOM). This issue is about using Behat for functionnal testing. Unit testing and functionnal testing (which I prefer to name validation testing) are different things, have different needs and are better served by different tools.

larowlan’s picture

@mongolito404 yes, I'm aware, but #237566: Automated JavaScript unit testing framework was closed in favour of this, so this is our de facto place for listing stuff.

benjy’s picture

As mentioned here

Codeception could be another alternative to Behat.

rcross’s picture

+1 for some review/discussion of codeception as an alternative to Behat.

grasmash’s picture

new45.58 KB
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 72,188 pass(es), 61 fail(s), and 0 exception(s).
[ View ]

Here is a first stab at integrating Behat with Drupal 8 core.

Some notes on the patch:

  • A new Behat module exists at core/modules/behat
  • A new script exists at core/scripts/
  • A new dependency has been added under 'require-dev' in composer.json, but the packages are not included in this patch
  • Feature tests have been written for Quickedit and Toolbar (thanks jbeach!)
  • There are still a few todos left in this code, though the base functionality is present

Some notes on the module:

  • This module will search for all .feature files located in /src/features for all enabled modules and register them as entities, to be executed via the script
  • Custom FeatureContext.php files may be defined in /src/features/bootstrap for each implementing module
  • Drush commands have already been written for this module, although they are not contained in this patch
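To give a sense of what the module would discover, a `.feature` file might look like the sketch below. The first step is a standard Drupal Extension step; the toolbar-specific steps are hypothetical custom steps that would need definitions in a module's `FeatureContext.php`, and the wording here is illustrative rather than taken from the patch:

```gherkin
Feature: Toolbar
  In order to navigate the administrative interface
  As an administrator
  I need a toolbar that responds to my interactions

  @javascript
  Scenario: Toggle the toolbar orientation
    Given I am logged in as a user with the "administrator" role
    When I click the toolbar orientation toggle
    Then the toolbar should be displayed vertically
```

The `@javascript` tag is what tells Behat/Mink to run the scenario in a real browser session instead of a headless HTTP client.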

To test this:

  • Apply the patch
  • Install Drupal
  • composer update "drupal/drupal-extension"
  • Enable the behat module
  • Update public://behat.yml with your local machine's base_url
  • Start selenium server
  • cd core/scripts
  • ./
grasmash’s picture

Status:Active» Needs review

Updating status.

Status:Needs review» Needs work

The last submitted patch, 46: behat-2232271-46.patch, failed testing.

sun’s picture

Didn't review the patch, but want to raise early: The files should be located in ./tests/features/*.feature - i.e., within the ./tests directory of each module and outside of ./src directories as those are reserved for PHP code.

grasmash’s picture

Status:Needs work» Needs review

Resubmitting patch. I've removed views integration for now (since it was causing testbot failures) and I've moved .feature files to tests/features at sun's suggestion.

Unfortunately, a recent change to session handling in D8 core requires a few changes to the drupal-extension library before this can be merged. We'll need to wait for 1.0.3. This commit will fix it:

grasmash’s picture

new35.5 KB
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 72,672 pass(es), 16 fail(s), and 0 exception(s).
[ View ]

Aaaand I'm attaching the patch.

Status:Needs review» Needs work

The last submitted patch, 51: behat-2232271-50.patch, failed testing.

grasmash’s picture

Well, 16 failures is better than 61!

The following error occurs because I haven't actually committed the new library, just added it to composer.json:
Class 'Behat\Gherkin\Keywords\ArrayKeywords' not found
This may be causing the module enable to fail, thereby triggering more test failures.

What's the proper procedure for updating composer dependencies in core patches?
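For context, the require-dev addition being discussed would be a composer.json fragment along these lines (the version constraint shown is an assumption, not necessarily the one in the patch):

```json
{
    "require-dev": {
        "drupal/drupal-extension": "~1.0"
    }
}
```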

nod_’s picture

You need a dedicated issue for adding the lib so that Dries can review and commit.

grasmash’s picture

grasmash’s picture

Dave Reid’s picture

Curious if we'd be able to use something like this to test interactions with more complex JavaScript, like interacting with CKEditor?

dsnopek’s picture

@Dave Reid: We use Behat to test TinyMCE in the Panopoly test suite. ;-) I think it should be possible to test CKEditor as well!
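For what it's worth, a WYSIWYG scenario should be expressible in Gherkin along the lines below. Every step here is a hypothetical custom step: because CKEditor replaces the textarea with its own editing surface, each would need a `FeatureContext` method that executes JavaScript through Mink's `@javascript` session rather than filling in a plain form field:

```gherkin
@javascript
Scenario: Format text in CKEditor
  Given I am editing a "page" node
  When I type "Hello world" in the "body" rich text editor
  And I select the text and press the "Bold" button
  Then the editor content should contain "<strong>Hello world</strong>"
```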

xjm’s picture

Title:[Meta] Use Behat» [Meta] Use Behat for JavaScript testing
Priority:Normal» Major

This would be huge.

benjy’s picture

I posted some concerns for discussion with Behat here:

JS testing would be huge, I agree with that.

pbuyle’s picture

@xjm: The title is misleading now. Behat is not restricted to JavaScript testing, and it is not a generic solution for JavaScript testing (for instance, Behat is not the appropriate tool for JavaScript unit testing). Also, Behat can be used to test the result of PHP, and even more importantly, to test the integration of JavaScript and PHP code. See my comment in #36.

IMHO, if Behat is to be used (which would be great), it should not be discussed and designed as a JavaScript testing solution.

nod_’s picture

Title:[Meta] Use Behat for JavaScript testing» [Meta] Use Behat
Issue tags:+Needs issue summary update

Indeed, using Behat has the nice side effect of allowing us to test JS, but it isn't restricted to that, as the related issues show.

I guess the issue summary should be updated.

moshe weitzman’s picture

Title:[Meta] Use Behat» [Meta] Use Behat for validation testing
Assigned:Unassigned» moshe weitzman

Retitled based on #36. We will test both PHP and JavaScript code to verify that it accomplishes its goals at a high level.

I hope to reroll this patch soon.

scor’s picture

Status:Needs work» Needs review
new33.39 KB
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 74,821 pass(es), 16 fail(s), and 0 exception(s).
[ View ]

I had to update drupal/drupal-extension to get a working install of Behat, and fix a few things in the patch. I'm attaching a reroll here.

Status:Needs review» Needs work

The last submitted patch, 64: behat-2232271-63.patch, failed testing.

clemens.tolboom’s picture

I assumed this was about Behat 3? What is the reason to use Behat 2 instead of Behat 3?


Drupal Extension 1.0 supports Behat 2.4, and Drupal 6 and 7 (with Drupal 8 support being backported as it changes). Drupal Extension 2.0 aims to work with Behat 3, and focus on Drupal 8.

As Behat 3.0.0 has been out since 2014-04-20, shouldn't we use it? The latest Behat 2 release hasn't been touched since 2014-04-26.

kostajh’s picture

I came to this issue via #237566: Automated JavaScript unit testing framework. I'd like to know if there is any interest in using CasperJS instead of Behat for front-end testing. In my experience, CasperJS is a much better tool than Behat (at least the 2.x version) for testing JavaScript interactions, and it is also a lot faster than Behat. It's easy to install and learn. It doesn't have a Gherkin-style syntax for writing scenarios, but this is somewhat mitigated by adding comments at the top of each test file explaining what is happening.

Drush integration is also possible for CasperJS.

I'd be happy to provide proof-of-concept code, and work to get this implemented, if there is interest.
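To make the comparison concrete, a minimal CasperJS test might look like the sketch below. It assumes a local base URL and requires the `casperjs` runner (which provides the global `casper` object), so it is illustrative rather than runnable as-is:

```javascript
// Run with: casperjs test toolbar.js
// The base URL is an assumption; in practice it would come from configuration.
casper.test.begin('Toolbar renders for the page', 1, function (test) {
  casper.start('http://localhost/drupal', function () {
    // Wait for the toolbar element to be attached by Drupal behaviors.
    this.waitForSelector('#toolbar-administration');
  });

  casper.then(function () {
    test.assertExists('#toolbar-administration', 'Toolbar element is present');
  });

  casper.run(function () {
    test.done();
  });
});
```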

attiks’s picture

A benefit of using Mink/Behat is that you can do more than just front-end testing: you can control the back end using the API.

Another benefit is that it can control a lot of browsers.
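For example, with Mink the browser choice is just configuration. A behat.yml sketch (Behat 3 / MinkExtension syntax; the URLs and profile names here are assumptions) could define per-browser profiles:

```yaml
default:
  extensions:
    Behat\MinkExtension:
      base_url: 'http://localhost/drupal'
      sessions:
        default:
          goutte: ~
        javascript:
          selenium2:
            browser: firefox
            wd_host: 'http://127.0.0.1:4444/wd/hub'

chrome:
  extensions:
    Behat\MinkExtension:
      sessions:
        javascript:
          selenium2:
            browser: chrome
```

The same feature files then run against whichever browser a profile selects (`behat --profile=chrome`), without changes to the tests themselves.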

kostajh’s picture

@attiks CasperJS can interact with the backend via Drush, which I think would be sufficient for this scope. It can also drive WebKit (via PhantomJS) or Gecko (using SlimerJS), which I also believe would be sufficient for what we are discussing here.

attiks’s picture

##6 I know CasperJS, but it is limited to those two browsers, and I hope that once we have this system in core, we will use it to test on as many browsers as possible.

There are/were too many JavaScript bugs in Drupal 7, most only showing up on some combination of browser and OS. Since Drupal 8 contains even more JavaScript, we (developers as well as users/end users) can only benefit from advanced front-end and integration testing.

You're right about the unit testing part; we need to handle that as well, but there are other tools we can use.

PS: We use a combination of Jenkins, Behat and PhantomJS to do automated front-end testing.

kostajh’s picture

... I hope that once we have this system for core, we will use it to test on as many browsers as possible. ... There are/were too many JavaScript bugs in Drupal 7, most only showing on some combo of browser and OS.

@attiks that's a very good point and this alone probably outweighs the benefits of using CasperJS.

pbuyle’s picture

@kostajh Please see #36, #61 and #63: Behat is not being considered for JavaScript unit testing (as reflected in the issue title).

RoySegall’s picture

Well, I don't know what the status of this issue is, but I have something that may help it -

chx’s picture

What happened to #44 and #45 mentioning Codeception? See . Although it's still PHPUnit :( but it's PIE. Also, should I repost here my concerns from #2469731-5: Document when to use BrowserTestBase over the whole testing world becoming PHPUnit-based, or is linking enough, or what's the best place to talk about this?

pfrenssen’s picture

#2469731: Document when to use BrowserTestBase is probably sufficient for now; we don't need to discuss this in two issues.

chx’s picture

@pfrenssen do you see any replies to those concerns there? I don't.

m1r1k’s picture

Status:Needs work» Needs review
new34.53 KB
FAILED: [[SimpleTest]]: [PHP 5.4 MySQL] 90,532 pass(es), 7 fail(s), and 0 exception(s).
[ View ]
new14.42 KB

Here is a rerolled patch using DrupalExtension 3.x (because 1.x is outdated and quite limiting for huge projects where separate, independent contexts are required).

Status:Needs review» Needs work

The last submitted patch, 77: behat-2232271-74.patch, failed testing.

m1r1k’s picture

All remaining errors are related to the missing library. Should I add the libraries to the repo so the tests can run?

clemens.tolboom’s picture

  1. +++ b/core/modules/behat/config/behat.yml.dist
    @@ -0,0 +1,30 @@
    \ No newline at end of file

    No newline at end of file

  2. +++ b/core/modules/quickedit/tests/features/Quickedit.feature
    @@ -0,0 +1,13 @@
    +  ¶


m1r1k’s picture

Issue tags:+drupaldevdays
yannickoo’s picture

Just a quick question to the patch developers:

  1. +++ b/core/modules/behat/behat.install
    @@ -0,0 +1,26 @@
    +  $behat_conf = Yaml::parse(DRUPAL_ROOT . '/' . \Drupal::moduleHandler()->getModule('behat')->getPath() . '/config/behat.yml.dist');
  2. +++ b/core/modules/behat/includes/
    @@ -0,0 +1,528 @@
    +    $system_path = DRUPAL_ROOT . '/' . drupal_get_path('module', 'behat') . '/tests/features/bootstrap';

Which function do you prefer for getting the path to the module? The patch uses two different functions: \Drupal::moduleHandler()->getModule('behat')->getPath() and the good old drupal_get_path('module', 'behat').

jhedstrom’s picture

Behat has its own command-line utility (similar to PHPUnit), so I'm not sure why we'd need to add and maintain a module that provides a UI. I think it would make more sense to keep the behat.yml.dist file in a location similar to phpunit.xml's, and the tests under the core/tests/features directory (and in modules, MODULE/tests/features).

Xano’s picture

Agreed. We've had numerous problems with the Simpletest UI already, especially in combination with PHPUnit. If we discover we need additions to the third-party (command-line) tools we use, we should try to fix those upstream instead.

yannickoo’s picture