Category Archives: Continuous Integration
Puppet development workflow with Git
I’ve published a post on engineering.talis.com entitled Puppet development workflow with Git.
Development workflow automation with Gruntjs
I’ve just written a new blog post over on the engineering.talis.com blog entitled Development workflow automation with Gruntjs.
Automated Testing Patterns and Smells
Wonderful tech talk by Gerard Meszaros, who is a consultant specialising in agile development processes. In this presentation Gerard describes a number of common problems encountered when writing and running automated unit and functional tests. He describes these problems as “test smells” and talks about their root causes, and he suggests possible solutions, which he expresses as design patterns for testing. While many of the practices he talks about are directly actionable by developers or testers, it’s important to realise that many also require action from a supportive manager and/or system architect to be really achievable.
We use many flavours of xUnit test frameworks in our development group at Talis, and we generally follow a Test First development approach. I found this talk beneficial because many of the issues Gerard describes are problems we have encountered, and I don’t doubt that every development group out there, including ours, can benefit from the insights he provides.
The material he uses in his talk and many of the examples are from his book xUnit Test Patterns: Refactoring Test Code, which I’m certainly going to order.
Automated regression testing and CI with Selenium PHP
I’ve been doing some work this iteration on getting Selenium RC integrated into our build process so we can run a suite of automated functional regression tests against our application on each build. The application I’m working on is written in PHP; normally, when you use Selenium IDE to record a test script, it saves it as an HTML file.
For example, a simple test script that goes to Google and verifies that the text “Search:” is present on the page and that the title of the page is “iGoogle” looks like this:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>New Test</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">New Test</td></tr>
</thead>
<tbody>
<tr>
    <td>open</td>
    <td>/ig?hl=en</td>
    <td></td>
</tr>
<tr>
    <td>verifyTextPresent</td>
    <td>Search:</td>
    <td></td>
</tr>
<tr>
    <td>assertTitle</td>
    <td>iGoogle</td>
    <td></td>
</tr>
</tbody>
</table>
</body>
</html>
You can choose to export the script in several other languages, including PHP, in which case the test script it produces looks like this:
<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class Example extends PHPUnit_Framework_TestCase
{
    private $selenium;

    public function setUp()
    {
        $this->verificationErrors = array();
        $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
        $this->selenium->start();
    }

    public function tearDown()
    {
        $this->selenium->stop();
    }

    public function testMyTestCase()
    {
        $this->selenium->open("/ig?hl=en");
        try {
            $this->assertTrue($this->selenium->isTextPresent("Search:"));
        } catch (PHPUnit_Framework_AssertionFailedError $e) {
            array_push($this->verificationErrors, $e->toString());
        }
        try {
            $this->assertEquals("iGoogle", $this->selenium->getTitle());
        } catch (PHPUnit_Framework_AssertionFailedError $e) {
            array_push($this->verificationErrors, $e->toString());
        }
    }
}
?>
The export produces a valid PHPUnit test case that uses the Selenium PHP Client Driver (Selenium.php). While the script is valid and will run, you do need to add a little more to it before the test will correctly report errors. As it stands, all errors captured during the test are added to an array called verificationErrors by catching the assertion exceptions that are thrown when an assert fails; in other words, if you ran this test as it is and it did fail, you wouldn’t know! To correct this we need to do two things. Firstly, each assert needs to have a message added to it, which will be printed out in the test report if the assert fails. Secondly, we need to modify the tearDown method so that once a test has run, it checks the verificationErrors array and, if any failures have occurred, fails the test. After making these changes the PHP test script looks like this:
<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class GoogleHomepageTest extends PHPUnit_Framework_TestCase
{
    private $selenium;

    public function setUp()
    {
        $this->verificationErrors = array();
        $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
        $this->selenium->start();
    }

    public function tearDown()
    {
        $this->selenium->stop();
        // Fail the test if any verification errors were captured during the run
        if (count($this->verificationErrors) > 0) {
            $this->fail("VERIFICATION ERRORS:" . "\n" . implode("\n", $this->verificationErrors));
        }
    }

    public function testGoogleHomepage()
    {
        $this->selenium->open("/ig?hl=en");
        try {
            $this->assertTrue($this->selenium->isTextPresent("Search:"),
                "The string Search: was not found");
        } catch (PHPUnit_Framework_AssertionFailedError $e) {
            array_push($this->verificationErrors, $e->toString());
        }
        $this->assertEquals("iGoogle",
            $this->selenium->getTitle(),
            "The page title did not match iGoogle.");
    }
}
?>
Obviously, I have also given the PHP class and test function slightly more meaningful names. Now you have a PHPUnit test case that will use the Selenium PHP Client Driver with Selenium Remote Control to launch a browser, go to the specified URL, and test a couple of assertions. If any of those assertions fail, the tearDown method fails the test … pretty cool, right?
Well, now it gets better. Because the Selenium Client Driver has a published API which is pretty easy to follow, there’s no reason why you can’t just write test cases without using Selenium IDE at all … for those who want to, you could even incorporate this into a TDD process.
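For example, here’s a rough sketch of a test written by hand directly against the client driver API. The class name, page and locators below are purely illustrative, but every method called on the driver (open, type, click, waitForPageToLoad, isElementPresent, getTitle) is part of the published API:

<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

// A hand-written Selenium test -- no Selenium IDE involved.
// The class name, URL and element locators are illustrative only.
class LoginTest extends PHPUnit_Framework_TestCase
{
    private $selenium;

    public function setUp()
    {
        $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
        $this->selenium->start();
    }

    public function tearDown()
    {
        $this->selenium->stop();
    }

    public function testUserCanLogIn()
    {
        $this->selenium->open("/login");
        $this->assertTrue($this->selenium->isElementPresent("username"),
            "The username field was not found");

        // Drive the browser just as a user would
        $this->selenium->type("username", "testuser");
        $this->selenium->type("password", "secret");
        $this->selenium->click("submit");
        $this->selenium->waitForPageToLoad("30000");

        $this->assertEquals("Welcome", $this->selenium->getTitle(),
            "The page title did not match Welcome");
    }
}
?>

But for all this to hang together we need to be able to run a build on a Continuous Integration server which checks out the code, runs the unit tests and the Selenium regression tests against that code line, and only passes the build if all the tests succeed.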
We are currently using Ant and CruiseControl to handle our CI/automated build process. When running the automated suite of tests we need to ensure that the Selenium Remote Control server is also running, which creates some complications. The Selenium Remote Control server takes several arguments, which can include the location of a test suite of HTML-based Selenium tests – which is really nice, because the server will start, execute those tests and then exit. Unfortunately, you can’t invoke the server and pass it the location of a PHP-based test suite. This means you need to find a way to start the server, then run your tests, and once they are complete, shut the Selenium server down.
Here are the Ant targets I have written to achieve this; if anyone can think of a better way of doing this I’d welcome feedback or suggestions. To run this example you’d simply enter the command “ant selenium”:
<target name="selenium" depends="clean, init" description="Run the Selenium tests">
    <parallel>
        <antcall target="StartRCServer" />
        <antcall target="RunSeleniumTests" />
    </parallel>
</target>

<target name="StartRCServer" description="Start the Selenium RC server">
    <java jar="dependencies/SeleniumRC/lib/selenium-server.jar"
          fork="true" failonerror="true">
        <jvmarg value="-Dhttp.proxyHost=host.domain.com"/>
        <jvmarg value="-Dhttp.proxyPort=80"/>
    </java>
</target>

<target name="RunSeleniumTests" description="RunAllSeleniumTests">
    <sleep milliseconds="2000" />
    <echo message="======================================" />
    <echo message="Running Selenium Regression Test Suite" />
    <echo message="======================================" />
    <exec executable="php"
          failonerror="false"
          dir="test/seleniumtests/regressiontests/"
          resultproperty="regError">
        <arg line="../../../dependencies/PHPUnit/TextUI/Command.php --log-xml ../../../doc/SeleniumTestReports/RegressionTests/TestReport.xml AllRegressionTests" />
    </exec>
    <get taskname="selenium-shutdown"
         src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
         dest="result.txt" ignoreerrors="true" />
    <condition property="regressionTest.err">
        <or>
            <equals arg1="1" arg2="${regError}" />
            <equals arg1="2" arg2="${regError}" />
        </or>
    </condition>
    <fail if="regressionTest.err" message="ERROR: Selenium Regression Tests Failed" />
</target>
A couple of notes: the reason I have to use a conditional check at the end of the selenium target is that if the exec task that runs the PHP tests were set to failonerror="true", the build would never reach the next line, which shuts the Selenium RC server down. To ensure that always happens I have to set the exec to failonerror="false", which means I have to check the result of the exec myself: it returns 0 on success, 1 if there are test failures, and 2 if there were any errors preventing a test from being executed. Hence the conditional check sets regressionTest.err if either of the latter two conditions is true.
Also, the server could take up to a second or so to start, but I can’t be sure precisely how long, so I use the Ant parallel task to call the target that starts the server and the target that runs the tests at the same time. The target that runs the tests sleeps for two seconds first, which should be more than enough time to allow the server to start. This all feels a little clunky, but at the moment it works very well.
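A less clunky alternative would be to replace the fixed sleep with Ant’s built-in waitfor task, which polls until the RC server’s port is actually accepting connections. Something along these lines should work, assuming the server is listening on its default port of 4444:

<!-- Poll for the RC server instead of sleeping for a fixed interval -->
<waitfor maxwait="30" maxwaitunit="second" checkevery="500" timeoutproperty="selenium.start.failed">
    <socket server="localhost" port="4444" />
</waitfor>
<fail if="selenium.start.failed" message="Selenium RC server did not start within 30 seconds" />

This would replace the sleep at the top of the RunSeleniumTests target, and has the added benefit of failing fast, with a clear message, if the server never comes up.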
In a nutshell, that’s how you integrate PHP-based automated Selenium regression tests into a continuous build.
Continuous Integration with PHP
I’m in the process of setting up a continuous integration environment for a new PHP project I’m starting. On our previous project, which was Java based, we used the following tools to allow us to implement the project using a test driven approach and to automate build generation:
- Cruise Control – For setting up a Continuous Build process.
- Ant – A Java based build tool.
- JUnit – A Java based xUnit testing framework.
- PMD – A tool that checks for common coding standards violations.
- Cobertura – A Java based code coverage tool that calculates the percentage of code accessed by unit tests.
I’ve managed to set up an analogous process for our PHP project using the following tools:
- Cruise Control – For setting up a Continuous Build process.
- Ant – A Java based build tool.
- PHPUnit – An xUnit testing framework for PHP.
- PHP_CodeSniffer – A PHP tool that checks for common coding standards violations.
- Xdebug – A debugging tool which, when combined with PHPUnit, can provide code coverage metrics.
It seems to work quite well; here’s the relatively simple Ant build script that controls it all.
<?xml version="1.0" encoding="UTF-8"?>
<project default="all" name="DummyProject" basedir=".">

    <target name="all" depends="clean, init, test, sniff" />

    <target name="clean">
        <delete dir="doc/CodeCoverage" />
        <delete dir="doc/UnitTestReport" />
    </target>

    <target name="init">
        <mkdir dir="doc/CodeCoverage" />
        <mkdir dir="doc/UnitTestReport" />
    </target>

    <target name="test" description="Run PHPUnit tests">
        <exec dir="./" executable="TestRunner.bat" failonerror="true" />
    </target>

    <target name="sniff" description="Run PHP_CodeSniffer">
        <exec dir="./" executable="Sniffer.bat" failonerror="true" />
    </target>

</project>
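The build delegates the real work to the two batch files referenced by the exec tasks. Roughly speaking they look something like this; the contents below are an illustrative sketch rather than the exact files (paths will vary with your project layout, and the coverage option depends on your PHPUnit version: early 3.x releases use --report, later ones --coverage-html):

@echo off
rem TestRunner.bat -- runs the PHPUnit suite, logging results as XML and
rem producing an HTML code coverage report via Xdebug
phpunit --log-xml doc/UnitTestReport/TestReport.xml --report doc/CodeCoverage AllTests
exit /b %ERRORLEVEL%

@echo off
rem Sniffer.bat -- phpcs exits with a non-zero code when violations are found,
rem which is what allows failonerror="true" to break the build
phpcs --report=summary --standard=PEAR src
exit /b %ERRORLEVEL%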
I’m currently running this on a Windows machine, although it’s trivial to change it to work in a *nix based environment, which I’ll probably configure in the next day or so. I had a couple of problems installing PHP_CodeSniffer, although that was because I hadn’t installed PEAR properly. If you have any problems installing PHP_CodeSniffer under Windows then follow these instructions.
To install PEAR under Windows do the following, which assumes you have PHP 5.2.x installed in c:\php:
cd c:\php
go-pear.bat
The interactive installer presents you with some options; if you follow the defaults you should be fine.
Once PEAR has installed you can install PHP_CodeSniffer like this:
cd c:\php
pear install PHP_CodeSniffer-beta
This will download the PHP_CodeSniffer package and install it into your php/PEAR folder.
Once this is done you can check that it has installed by calling phpcs with the -h flag, which will produce the following:
C:\php>phpcs -h
Usage: phpcs [-nlvi] [--report=<report>] [--standard=<standard>]
    [--generator=<generator>] [--extensions=<extensions>] <file> ...
        -n            Do not print warnings
        -l            Local directory only, no recursion
        -v[v][v]      Print verbose output
        -i            Show a list of installed coding standards
        --help        Print this help message
        --version     Print version information
        <file>        One or more files and/or directories to check
        <extensions>  A comma separated list of file extensions to check
                      (only valid if checking a directory)
        <standard>    The name of the coding standard to use
        <generator>   The name of a doc generator to use
                      (forces doc generation instead of checking)
        <report>      Print either the "full" or "summary" report
Now try it out …
C:\php>phpcs C:\Projects\dummyproject\test\

FILE: C:\Projects\dummyproject\test\AllTests.php
--------------------------------------------------------------------------------
FOUND 7 ERROR(S) AND 0 WARNING(S) AFFECTING 7 LINE(S)
--------------------------------------------------------------------------------
  1 | ERROR | End of line character is invalid; expected "\n" but found "\r\n"
  2 | ERROR | Missing file doc comment
  3 | ERROR | Constants must be uppercase; expected 'PHPUNIT_MAIN_METHOD' but
    |       | found 'PHPUnit_MAIN_METHOD'
 12 | ERROR | Missing class doc comment
 14 | ERROR | Missing function doc comment
 19 | ERROR | Missing function doc comment
 30 | ERROR | Constants must be uppercase; expected PHPUNIT_MAIN_METHOD but
    |       | found PHPUnit_MAIN_METHOD
--------------------------------------------------------------------------------
Check the documentation for what the various command line switches do; you can also generate summary reports for an entire directory tree:
C:\php>phpcs --report=summary --standard=Squiz C:\Projects\dummyproject\test\

PHP CODE SNIFFER REPORT SUMMARY
--------------------------------------------------------------------------------
FILE                                                          ERRORS  WARNINGS
--------------------------------------------------------------------------------
C:\Projects\dummyproject\test\AllTests.php                    24      0
C:\Projects\dummyproject\test\NodeTest.php                    10      0
C:\Projects\dummyproject\test\NodeTypeTest.php                11      0
--------------------------------------------------------------------------------
A TOTAL OF 45 ERROR(S) AND 0 WARNING(S) WERE FOUND IN 3 FILE(S)
--------------------------------------------------------------------------------
…
Overall I’m quite happy with this setup, which for the most part was pretty straightforward. I have no doubt it will evolve over time, but I think it’s a good foundation to build upon.
The Role of Testing and QA in Agile Software Development
In our development group at Talis we’ve been thinking a lot about how to test more effectively in an agile environment. One of my colleagues sent me a link to this excellent talk by Scott Ambler, which examines the Role of Testing and QA in Agile Software Development.
Much of the talk is really an introduction to agile development, which is worth listening to because Scott dispels some of the myths around agile and offers his own views on best practices using some examples. It does get a bit heated around the 45 minute mark, when he’s discussing database refactoring; some of the people in the audience were struggling with the idea he was presenting, which I felt was fairly simple. If you want to skip all that, jump forward to the 50 minute mark, where he starts talking about sandboxes. What I will say is that if you’re having difficulty getting agile accepted into your organisation then this might be a video you want to show your managers, since it covers all the major issues and benefits.
Here are some of the tips he offers with regard to testing and improving quality:
- Do Test Driven Development: the unit tests are the detailed design, and they force developers to think about the design. Call it just-in-time design.
- Use Continuous Integration to build and run unit tests on each check-in to trunk.
- Acceptance tests are primary artefacts. Don’t bother with a requirements document; simply maintain the acceptance tests, since the reality is that all testing teams will do is take the requirement and copy it into an acceptance test, so why introduce a traceability issue when you don’t need to? http://www.agilemodeling.com/essays/singleSourceInformation.htm
- Use Standards and Guidelines to help ensure teams are creating consistent artefacts.
- Code reviews and inspections are not a best practice. They are used to compensate for people working alone, not sharing their work, not communicating, poor teamwork and poor collaboration. “Guru checks output” is an anti-pattern. Working together, pairing, good communication and teamwork should negate the need for code reviews and inspections.
- A short feedback loop is extremely important. The faster you can get testing results and feedback from stakeholders, the better.
- Testers need to be flexible, willing to pick up new skills, and able to work with others. They need to be generalising specialists. The emerging belief in agile is that there is no need for traditional testers.
Scott is a passionate speaker and very convincing; some of the points he makes are quite controversial yet hard to ignore, especially his argument that traditional testers are becoming less necessary. I’m not sure I agree with all his views, yet he has succeeded in forcing me to challenge my own, which I need to mull over, and for that reason alone watching his talk has been invaluable.
Buildix – KNOPPIX Based Linux Distro
I heard about this on a Google tech talk. It’s a Linux distro distributed as an rpm or a VM (which I have downloaded), and it provides a pre-configured install of CruiseControl, Trac and Subversion …
Martin Fowler mentions it on his site; obviously he’s recommending it because of his involvement with ThoughtWorks. Nevertheless it’s a great idea, based around shortening iteration zero.
There’s a good blog post by one of the “authors” describing why they made this, which is well worth reading.
I’ve found it easy to use and configure and it gives a great head start to project teams trying to get continuous integration environments set up.