Automated regression testing and CI with Selenium PHP

I’ve been doing some work this iteration on getting Selenium RC integrated into our build process so we can run a suite of automated functional regression tests against our application on each build. The application I’m working on is written in PHP. Normally, when you use Selenium IDE to record a test script, it saves it as an HTML file.

For example, a simple test script that goes to Google, verifies that the text “Search:” is present on the screen, and checks that the title of the page is “iGoogle” looks like this:

  <html>
  <head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <title>New Test</title>
  </head>
  <body>
  <table cellpadding="1" cellspacing="1" border="1">
  <thead>
  <tr><td rowspan="1" colspan="3">New Test</td></tr>
  </thead><tbody>
  <tr>
      <td>open</td>
      <td>/ig?hl=en</td>
      <td></td>
  </tr>
  <tr>
      <td>verifyTextPresent</td>
      <td>Search:</td>
      <td></td>
  </tr>
  <tr>
      <td>assertTitle</td>
      <td>iGoogle</td>
      <td></td>
  </tr>
  </tbody></table>
  </body>
  </html>

You can choose to export the script in several other languages, including PHP, in which case the test script it produces looks like this:

  <?php

  require_once 'Testing/Selenium.php';
  require_once 'PHPUnit/Framework/TestCase.php';

  class Example extends PHPUnit_Framework_TestCase
  {
      private $selenium;

      public function setUp()
      {
          $this->verificationErrors = array();
          $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
          $this->selenium->start();
      }

      public function tearDown()
      {
          $this->selenium->stop();
      }

      public function testMyTestCase()
      {
          $this->selenium->open("/ig?hl=en");
          try {
              $this->assertTrue($this->selenium->isTextPresent("Search:"));
          } catch (PHPUnit_Framework_AssertionFailedError $e) {
              array_push($this->verificationErrors, $e->toString());
          }
          $this->assertEquals("iGoogle", $this->selenium->getTitle());
      }
  }
  ?>

The export produces a valid PHPUnit test case that uses the Selenium PHP Client Driver (Selenium.php). Whilst the script is valid and will run, you do need to add a little more to it before the test will correctly report errors. As it stands, any errors captured during the test are added to an array called verificationErrors by catching the assertion exceptions that are thrown when an assert fails; in other words, if you ran this test as it is and it did fail, you wouldn’t know! To correct this we need to do two things. Firstly, each assert needs a message added to it, which will be printed out in the test report if the assert fails. Secondly, we need to modify the tearDown method so that once a test has run, it checks the verificationErrors array and, if any failures have occurred, fails the test. After making these changes the PHP test script looks like this:

  <?php

  require_once 'Testing/Selenium.php';
  require_once 'PHPUnit/Framework/TestCase.php';

  class GoogleHomepageTest extends PHPUnit_Framework_TestCase
  {
      private $selenium;
      private $verificationErrors;

      public function setUp()
      {
          $this->verificationErrors = array();
          $this->selenium = new Testing_Selenium("*firefox",
                                                 "http://localhost:4444/");
          $this->selenium->start();
      }

      public function tearDown()
      {
          $this->selenium->stop();
          // Fail the test if any verification errors were recorded.
          if (!empty($this->verificationErrors)) {
              $this->fail("VERIFICATION ERRORS:" . "\n"
                          . implode("\n", $this->verificationErrors));
          }
      }

      public function testGoogleHomepage()
      {
          $this->selenium->open("/ig?hl=en");
          try {
              $this->assertTrue($this->selenium->isTextPresent("Search:"),
                  "The string Search: was not found");
          } catch (PHPUnit_Framework_AssertionFailedError $e) {
              array_push($this->verificationErrors, $e->toString());
          }
          $this->assertEquals("iGoogle",
              $this->selenium->getTitle(),
              "The page title did not match iGoogle.");
      }
  }
  ?>

Obviously, I have also given the PHP class and test function slightly more meaningful names. Now you have a PHPUnit test case that will use the Selenium PHP Client Driver with Selenium Remote Control to launch a browser, go to the specified URL, and test a couple of assertions. If any of those assertions fail, the tearDown method fails the test … pretty cool, right?

Well, now it gets better. Because the Selenium Client Driver has a published API which is pretty easy to follow, there’s no reason why you can’t just write test cases without using Selenium IDE … for those who want to, you could even incorporate this into a TDD process (a minimal hand-written example follows below). But for all this to hang together we need to be able to run a build on a Continuous Integration server which checks out the code, runs unit tests and Selenium regression tests against that code line, and passes the build only if all tests succeed.
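
Because the API is published you can script a browser session entirely by hand. Here’s a minimal sketch of such a hand-written session; note that the element locators "q" and "btnG" and the search term are illustrative assumptions of mine, not something taken from the recorded scripts above:

  <?php
  require_once 'Testing/Selenium.php';

  // A hand-written browser session -- no Selenium IDE recording involved.
  // "*firefox" and the RC server URL match the exported scripts above;
  // the element locators and search term below are illustrative only.
  $selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
  $selenium->start();
  $selenium->open("/ig?hl=en");
  $selenium->type("q", "selenium remote control");  // type into a search box
  $selenium->click("btnG");                         // click a search button
  $selenium->waitForPageToLoad("30000");            // wait up to 30 seconds
  echo $selenium->getTitle() . "\n";
  $selenium->stop();
  ?>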

We are currently using Ant and CruiseControl to handle our CI/automated build process. When running the automated suite of tests we need to ensure that the Selenium Remote Control server is also running, which creates some complications. The Selenium Remote Control server takes several arguments, which can include the location of a test suite of HTML based Selenium tests – which is really nice, because the server will start, execute those tests and then exit. Unfortunately you can’t invoke the server and pass it the location of a PHP based test suite like the one sketched below. This means you need to find a way to start up the server, then run your tests, and once they are complete, shut the Selenium server down.
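
The PHP based test suite itself is nothing exotic: it’s a standard PHPUnit aggregate suite class. As a minimal sketch, the AllRegressionTests suite referenced by the build file below might look like this (the included test class is the one from earlier; a real suite would register many more):

  <?php
  require_once 'PHPUnit/Framework.php';
  require_once 'GoogleHomepageTest.php';  // the Selenium test case shown above

  class AllRegressionTests
  {
      public static function suite()
      {
          // Aggregate all Selenium regression tests into a single suite.
          $suite = new PHPUnit_Framework_TestSuite('SeleniumRegressionTests');
          $suite->addTestSuite('GoogleHomepageTest');
          // ... register further test case classes here
          return $suite;
      }
  }
  ?>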

Here are the Ant targets that I have written to achieve this; if anyone can think of better ways of doing this I’d welcome any feedback or suggestions. To run this example you’d simply enter the command “ant selenium”:

  <target name="selenium" depends="clean, init" description="Run the Selenium tests">
    <parallel>
      <antcall target="StartRCServer" />
      <antcall target="RunSeleniumTests" />
    </parallel>
  </target>

  <target name="StartRCServer" description="Start the Selenium RC server">
    <java jar="dependencies/SeleniumRC/lib/selenium-server.jar"
          fork="true" failonerror="true">
      <jvmarg value="-Dhttp.proxyHost=host.domain.com"/>
      <jvmarg value="-Dhttp.proxyPort=80"/>
    </java>
  </target>

  <target name="RunSeleniumTests" description="RunAllSeleniumTests">
    <sleep milliseconds="2000" />
    <echo message="======================================" />
    <echo message="Running Selenium Regression Test Suite" />
    <echo message="======================================" />
    <exec executable="php"
          failonerror="false"
          dir="test/seleniumtests/regressiontests/"
          resultproperty="regError">
      <arg line="../../../dependencies/PHPUnit/TextUI/Command.php --log-xml ../../../doc/SeleniumTestReports/RegressionTests/TestReport.xml AllRegressionTests" />
    </exec>
    <get taskname="selenium-shutdown"
         src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
         dest="result.txt" ignoreerrors="true" />

    <condition property="regressionTest.err">
      <or>
        <equals arg1="1" arg2="${regError}" />
        <equals arg1="2" arg2="${regError}" />
      </or>
    </condition>

    <fail if="regressionTest.err" message="ERROR: Selenium Regression Tests Failed" />
  </target>

A couple of notes. The reason I have to use a conditional check at the end of the selenium target is that if the exec task that runs the PHP tests were set to failonerror="true", the build would never reach the next line, which shuts the Selenium RC server down. To ensure that always happens I have to set the exec to failonerror="false", but this means I have to check the result of the exec myself: it returns 0 if the run was successful, 1 if any test failures exist, and 2 if there were any errors (something preventing a test from being executed). Hence the conditional check sets regressionTest.err if either of the latter two exit codes is returned.

Also, starting up the server could take up to a second, though I can’t be sure precisely how long, so I have to use the Ant parallel task, which invokes the task that starts the server and the task that runs the tests at the same time. The task that runs the tests has a two second sleep in it, which should be more than enough time to allow the server to start. This all feels a little clunky, but at the moment it does work very well.
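
That said, if the fixed sleep ever proves fragile, one less clunky alternative would be to poll the RC server’s port from PHP before the suite runs, and to hit the same shutdown URL the build file uses once it finishes. Here’s a minimal sketch, assuming the host, port and timeout shown (none of which come from the original build file):

  <?php
  // Block until the Selenium RC server is accepting connections, or give up.
  function waitForSeleniumServer($host = 'localhost', $port = 4444, $timeoutSecs = 30)
  {
      $deadline = time() + $timeoutSecs;
      while (time() < $deadline) {
          $socket = @fsockopen($host, $port, $errno, $errstr, 1);
          if ($socket !== false) {
              fclose($socket);   // the server is up and listening
              return true;
          }
          usleep(250000);        // retry every quarter of a second
      }
      return false;
  }

  // Ask the RC server to shut itself down (the same URL the Ant get task hits).
  function shutDownSeleniumServer()
  {
      @file_get_contents('http://localhost:4444/selenium-server/driver/?cmd=shutDown');
  }
  ?>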

In a nutshell, that’s how you integrate PHP based automated Selenium regression tests into a continuous build.

Continuous Integration with PHP

I’m in the process of setting up a continuous integration environment for a new PHP project I’m starting. On our previous project, which was Java based, we used the following tools to implement the project using a test driven approach and to automate build generation:

  • Cruise Control – For setting up a Continuous Build process.
  • Ant – A Java based build tool.
  • JUnit – A Java based xUnit testing framework
  • PMD – A tool that checks for common coding standards violations.
  • Cobertura – A Java based code coverage tool that calculates the percentage of code exercised by unit tests

I’ve managed to set up an analogous process for our PHP project using the following tools:

  • Cruise Control – For setting up a Continuous Build process.
  • Ant – A Java based build tool.
  • PHPUnit – An xUnit testing framework for PHP
  • PHP_CodeSniffer – A PHP tool that checks for common coding standards violations.
  • Xdebug – A debugging tool which, when combined with PHPUnit, can provide code coverage metrics (see the sketch below)
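
To illustrate what Xdebug contributes here, these are the raw coverage primitives that PHPUnit drives when the extension is loaded. This is just a sketch; the Node class being exercised is hypothetical:

  <?php
  // Xdebug's coverage API, on which PHPUnit builds its metrics.
  // Requires the Xdebug extension; Node is a hypothetical class under test.
  xdebug_start_code_coverage(XDEBUG_CC_UNUSED | XDEBUG_CC_DEAD_CODE);

  require_once 'Node.php';
  $node = new Node('root');   // exercise some code paths

  $coverage = xdebug_get_code_coverage();
  xdebug_stop_code_coverage();

  // $coverage maps each file name to an array of line => status entries.
  foreach ($coverage as $file => $lines) {
      printf("%s: %d lines recorded\n", $file, count($lines));
  }
  ?>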

It seems to work quite well; here’s the relatively simple Ant build script that controls it all.

  <?xml version="1.0" encoding="UTF-8"?>
  <project default="all" name="DummyProject" basedir=".">
      <target name="all" depends="clean, init, test, sniff" />

      <target name="clean">
          <delete dir="doc/CodeCoverage" />
          <delete dir="doc/UnitTestReport" />
      </target>

      <target name="init">
          <mkdir dir="doc/CodeCoverage" />
          <mkdir dir="doc/UnitTestReport" />
      </target>

      <target name="test" description="Run PHPUnit tests">
          <exec dir="./" executable="TestRunner.bat" failonerror="true" />
      </target>

      <target name="sniff" description="">
          <exec dir="./" executable="Sniffer.bat" failonerror="true" />
      </target>
  </project>

I’m currently running this on a Windows machine, although it’s trivial to change it to work in a *nix based environment, which I’ll probably configure in the next day or so. I had a couple of problems installing PHP_CodeSniffer, although that was because I hadn’t installed PEAR properly. If you have any problems installing PHP_CodeSniffer under Windows then follow these instructions:

To install PEAR under Windows do the following, which assumes you have PHP 5.2.x installed in C:\php:

  cd C:\php
  go-pear.bat

The interactive installer presents you with some options; if you follow the defaults you should be fine.
Once PEAR has installed you can install PHP_CodeSniffer like this:

  cd C:\php
  pear install PHP_CodeSniffer-beta

This will download the PHP_CodeSniffer package and install it into your php/PEAR folder.

Once this is done you can check that it has installed by calling phpcs with the -h flag, which will produce the following:

C:\php>phpcs -h
Usage: phpcs [-nlvi] [--report=<report>] [--standard=<standard>]
    [--generator=<generator>] [--extensions=<extensions>] <file> ...
        -n            Do not print warnings
        -l            Local directory only, no recursion
        -v[v][v]      Print verbose output
        -i            Show a list of installed coding standards
        --help        Print this help message
        --version     Print version information
        <file>        One or more files and/or directories to check
        <extensions>  A comma separated list of file extensions to check
                      (only valid if checking a directory)
        <standard>    The name of the coding standard to use
        <generator>   The name of a doc generator to use
                      (forces doc generation instead of checking)
        <report>      Print either the "full" or "summary" report

Now try it out …

C:\php>phpcs C:\Projects\dummyproject\test\

FILE: C:\Projects\dummyproject\test\AllTests.php
--------------------------------------------------------------------------------
FOUND 7 ERROR(S) AND 0 WARNING(S) AFFECTING 7 LINE(S)
--------------------------------------------------------------------------------
  1 | ERROR | End of line character is invalid; expected "\n" but found "\r\n"
  2 | ERROR | Missing file doc comment
  3 | ERROR | Constants must be uppercase; expected 'PHPUNIT_MAIN_METHOD' but
    |       | found 'PHPUnit_MAIN_METHOD'
 12 | ERROR | Missing class doc comment
 14 | ERROR | Missing function doc comment
 19 | ERROR | Missing function doc comment
 30 | ERROR | Constants must be uppercase; expected PHPUNIT_MAIN_METHOD but
    |       | found PHPUnit_MAIN_METHOD
--------------------------------------------------------------------------------

Check the documentation for what the various command line switches do, but you can generate summary reports as well on an entire directory tree:

C:\php>phpcs --report=summary --standard=Squiz C:\Projects\dummyproject\test\

PHP CODE SNIFFER REPORT SUMMARY
--------------------------------------------------------------------------------
FILE                                                            ERRORS  WARNINGS
--------------------------------------------------------------------------------
C:\Projects\dummyproject\test\AllTests.php                      24      0
C:\Projects\dummyproject\test\NodeTest.php                      10      0
C:\Projects\dummyproject\test\NodeTypeTest.php                  11      0
--------------------------------------------------------------------------------
A TOTAL OF 45 ERROR(S) AND 0 WARNING(S) WERE FOUND IN 3 FILE(S)
--------------------------------------------------------------------------------

Overall I’m quite happy with this setup, which for the most part was pretty straightforward. I have no doubt it will evolve over time, but I think it’s a good foundation to build upon.

The Role of Testing and QA in Agile Software Development

In our development group at Talis we’ve been thinking a lot about how to test more effectively in an agile environment. One of my colleagues sent me a link to this excellent talk by Scott Ambler, which examines the role of testing and QA in agile software development.

Much of the talk is really an introduction to agile development, which is worth listening to because Scott dispels some of the myths around agile and offers his own views on best practices using some examples. It does get a bit heated around the 45 minute mark when he’s discussing database refactoring; some of the people in the audience were struggling with the idea he was presenting, which I felt was fairly simple. If you want to skip all that, jump forward to the 50 minute mark where he starts talking about sandboxes. What I will say is that if you’re having difficulty getting agile accepted in your organisation then this might be a video you want to show your managers, since it covers all the major issues and benefits.

Here are some of the tips he offers with regard to testing and improving quality:

  • Do Test Driven Development: the unit tests are the detailed design; they force developers to think about the design. Call it just-in-time design.
  • Use Continuous Integration to build and run unit tests on each check-in to trunk.
  • Acceptance tests are primary artefacts. Don’t bother with a requirements document; simply maintain the acceptance tests, since in reality all testing teams will do is take each requirement and copy it into an acceptance test, so why introduce a traceability issue when you don’t need to? See http://www.agilemodeling.com/essays/singleSourceInformation.htm
  • Use Standards and Guidelines to help ensure teams are creating consistent artefacts.
  • Code reviews and inspections are not a best practice. They are used to compensate for people working alone, not sharing their work, not communicating, poor teamwork, and poor collaboration. “Guru checks output” is an anti-pattern. Working together, pairing, good communication, and teamwork should negate the need for code reviews and inspections.
  • A short feedback loop is extremely important. The faster you can get testing results and feedback from stakeholders the better.
  • Testers need to be flexible, willing to pick up new skills, and able to work with others. They need to be generalising specialists. The emerging belief in agile is that there is no need for traditional testers.

Scott is a passionate speaker and very convincing; some of the points he makes are quite controversial yet hard to ignore – especially his argument that traditional testers are becoming less necessary. I’m not sure I agree with all his views, yet he has succeeded in forcing me to challenge my own, which I need to mull over, and for that reason alone watching his talk has been invaluable.

Buildix – KNOPPIX Based Linux Distro

I heard about this on a Google tech talk. It’s a Linux distro distributed as an RPM or a VM (which I have downloaded), and it provides a pre-configured install of CruiseControl, Trac and Subversion …

Martin Fowler mentions it on his site; obviously he’s recommending it because of his involvement with ThoughtWorks. Nevertheless it’s a great idea, based around shortening iteration zero.

There’s a good blog post by one of the “authors” describing why they made it, which is well worth reading.

I’ve found it easy to use and configure and it gives a great head start to project teams trying to get continuous integration environments set up.