Automated regression testing and CI with Selenium PHP

I’ve been doing some work this iteration on getting Selenium RC integrated into our build process so we can run a suite of automated functional regression tests against our application on each build. The application I’m working on is written in PHP. Normally, when you use Selenium IDE to record a test script, it saves it as an HTML file.

For example, a simple test script that goes to Google and verifies that the text “Search:” is present on the screen and that the title of the page is “iGoogle” looks like this:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>New Test</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">New Test</td></tr>
</thead><tbody>
<tr>
    <td>open</td>
    <td>/ig?hl=en</td>
    <td></td>
</tr>
<tr>
    <td>verifyTextPresent</td>
    <td>Search:</td>
    <td></td>
</tr>
<tr>
    <td>assertTitle</td>
    <td>iGoogle</td>
    <td></td>
</tr>
</tbody></table>
</body>
</html>

You can choose to export the script in several other languages, including PHP, in which case the test script it produces looks like this:

<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class Example extends PHPUnit_Framework_TestCase
{
  private $selenium;

  public function setUp()
  {
    $this->verificationErrors = array();
    $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
    $this->selenium->start();
  }

  public function tearDown()
  {
    $this->selenium->stop();
  }

  public function testMyTestCase()
  {
    $this->selenium->open("/ig?hl=en");
    try {
      $this->assertTrue($this->selenium->isTextPresent("Search:"));
    } catch (PHPUnit_Framework_AssertionFailedError $e) {
      array_push($this->verificationErrors, $e->toString());
    }
    $this->assertEquals("iGoogle", $this->selenium->getTitle());
  }
}
?>

The export produces a valid PHPUnit test case that uses the Selenium PHP Client Driver (Selenium.php). Whilst the script is valid and will run, you do need to add a little more to it before the test will correctly report errors. As it stands, all errors captured during the test are added to an array called verificationErrors by catching the assertion exceptions that are thrown when an assert fails; in other words, if you ran this test as it is and it did fail, you wouldn’t know! To correct this we need to do two things. Firstly, each assert needs to have a message added to it, which will be printed out in the test report if the assert fails. Secondly, we need to modify the tearDown method so that once a test has run, it checks the verificationErrors array and, if any failures have occurred, fails the test. After making these changes the PHP test script looks like this:

<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class GoogleHomePageTest extends PHPUnit_Framework_TestCase
{
  private $selenium;
  private $verificationErrors;

  public function setUp()
  {
    $this->verificationErrors = array();
    $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
    $this->selenium->start();
  }

  public function tearDown()
  {
    $this->selenium->stop();
    // If any verification errors were recorded during the test, fail it now.
    if (count($this->verificationErrors) > 0) {
      $this->fail("VERIFICATION ERRORS:" . "\n" . implode("\n", $this->verificationErrors));
    }
  }

  public function testSearchTextAndPageTitle()
  {
    $this->selenium->open("/ig?hl=en");
    try {
      $this->assertTrue($this->selenium->isTextPresent("Search:"),
               "The string Search: was not found");
    } catch (PHPUnit_Framework_AssertionFailedError $e) {
      array_push($this->verificationErrors, $e->toString());
    }
    $this->assertEquals("iGoogle",
              $this->selenium->getTitle(),
              "The page title did not match iGoogle.");
  }
}
?>

Obviously, I have also given the PHP class and test function slightly more meaningful names. Now you have a PHPUnit test case that will use the Selenium PHP Client Driver with Selenium Remote Control to launch a browser, go to the specified URL, and test a couple of assertions. If any of those assertions fail, the tearDown method fails the test … pretty cool, right?

Well, now it gets better. Because the Selenium Client Driver has a published API which is pretty easy to follow, there’s no reason why you can’t just write test cases without using Selenium IDE … for those who want to, you could even incorporate this into a TDD process. But for all this to hang together we need to be able to run a build on a Continuous Integration server which checks out the code, runs unit tests and Selenium regression tests against that code line, and only passes the build if all tests succeed.
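For example, a hand-written test against the client driver API might look something like the sketch below; the URL, form locators and expected text here are purely illustrative, not taken from a real application:

<?php

require_once 'Testing/Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class SearchPageTest extends PHPUnit_Framework_TestCase
{
  private $selenium;

  public function setUp()
  {
    $this->selenium = new Testing_Selenium("*firefox", "http://localhost/");
    $this->selenium->start();
  }

  public function tearDown()
  {
    $this->selenium->stop();
  }

  public function testSearchReturnsResults()
  {
    // The page, field and button locators below are made up for illustration.
    $this->selenium->open("/search");
    $this->selenium->type("q", "selenium");
    $this->selenium->click("submit");
    $this->selenium->waitForPageToLoad("30000");
    $this->assertTrue($this->selenium->isTextPresent("Results"),
             "Expected the results page to contain the text Results");
  }
}
?>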

We are currently using Ant and CruiseControl to handle our CI/automated build process. When running the automated suite of tests we need to ensure that the Selenium Remote Control server is also running, which creates some complications. The Selenium Remote Control server takes several arguments, which can include the location of a test suite of HTML-based Selenium tests – which is really nice because the server will start, execute those tests and then exit. Unfortunately you can’t invoke the server and pass it the location of a PHP-based test suite. This means you need to find a way to start up the server, then run your tests, and once they are complete, shut the Selenium server down.

Here are the Ant targets I have written to achieve this; if anyone can think of better ways of doing this I’d welcome any feedback or suggestions. To run this example you’d simply enter the command “ant selenium”:

<target name="selenium" depends="clean, init" description="Run the Selenium tests">
  <parallel>
    <antcall target="StartRCServer" />
    <antcall target="RunSeleniumTests" />
  </parallel>
</target>

<target name="StartRCServer" description="Start the Selenium RC server">
  <java jar="dependencies/SeleniumRC/lib/selenium-server.jar"
        fork="true" failonerror="true">
    <jvmarg value="-Dhttp.proxyHost=host.domain.com"/>
    <jvmarg value="-Dhttp.proxyPort=80"/>
  </java>
</target>

<target name="RunSeleniumTests" description="RunAllSeleniumTests">
  <sleep milliseconds="2000" />
  <echo message="======================================" />
  <echo message="Running Selenium Regression Test Suite" />
  <echo message="======================================" />
  <exec executable="php"
       failonerror="false"
       dir="test/seleniumtests/regressiontests/"
       resultproperty="regError">
       <arg line="../../../dependencies/PHPUnit/TextUI/Command.php --log-xml ../../../doc/SeleniumTestReports/RegressionTests/TestReport.xml AllRegressionTests" />
  </exec>
  <get taskname="selenium-shutdown"
    src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
    dest="result.txt" ignoreerrors="true" />

  <condition property="regressionTest.err">
    <or>
       <equals arg1="1" arg2="${regError}" />
       <equals arg1="2" arg2="${regError}" />
    </or>
  </condition>

  <fail if="regressionTest.err" message="ERROR: Selenium Regression Tests Failed" />
</target>

A couple of notes. The reason I have to use a conditional check at the end of the selenium target is that, if the exec task that runs the PHP tests were set to failonerror="true", the build would never reach the next line, which shuts the Selenium RC server down. To ensure that always happens I have to set the exec to failonerror="false", but this means I have to check the result returned by the exec: it returns 0 if successful, 1 if test failures exist, and 2 if there were any errors (preventing a test from being executed). Hence the conditional check sets regressionTest.err if either of the latter two values is returned.

Also, in order to start up the server, which could take up to a second (I can’t be sure precisely how long), I have to use the Ant parallel task, which calls the target that starts the server and the target that runs the tests at the same time. The target that runs the tests has a 2 second sleep in it, which should be more than enough time to allow the server to start. This all feels a little clunky, but at the moment it works very well.

In a nutshell, that’s how you integrate PHP-based automated Selenium regression tests into a continuous build.

Interesting PHP / XSL Cross Platform Defect

Came across a rather interesting cross-platform problem whilst trying to perform a fairly simple XSL transformation using PHP. To understand the problem, take a look at this little snippet of XSL and see if you can spot what’s wrong with it:

<xsl:template name="someTemplate">
  <xsl:for-each select="/rdf:RDF/rss:item" >
  <xsl:param name="foo"/>
  …
  </xsl:for-each>
</xsl:template>

The flaw in the snippet above is that it defines an xsl:param inside an xsl:for-each, which is completely invalid. Earlier today one of my colleagues was refactoring some XSL and, whilst copying code from one template into another, he inadvertently made the mistake above. Now you’d like to think that when you run the XSL through PHP the invalid markup would throw an error. Well, under PHP 5.2.2 on Windows it doesn’t throw an error – in fact the XSL runs and performs the transformation. However, when you run the same XSL under PHP on Linux it quite rightly throws an error and informs you that the xsl:param is invalid.
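If you want to see what the underlying XML/XSLT libraries are actually reporting on a given platform, a minimal sketch like the one below (assuming the stylesheet is saved in a file called someTemplate.xsl) collects the libxml errors explicitly rather than letting them pass silently:

<?php
// Ask libxml to collect errors rather than emitting warnings.
libxml_use_internal_errors(true);

$xslDoc = new DOMDocument();
$xslDoc->load('someTemplate.xsl');   // hypothetical file containing the snippet above

$processor = new XSLTProcessor();
$processor->importStylesheet($xslDoc);

// Print anything the parser or the XSLT compiler complained about.
foreach (libxml_get_errors() as $error) {
    printf("line %d: %s\n", $error->line, trim($error->message));
}
libxml_clear_errors();
?>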

I haven’t looked too deeply into why the invalid markup is accepted under Windows. I suspect it might have something to do with the php_xsl.dll that comes as part of PHP 5.2.2, but it is something that made my jaw drop. Has anyone else experienced this?

Calling PHP Functions from XSL

Craig, a colleague of mine who recently joined our development team at Talis, showed me this neat little trick. Many things are far easier to do in PHP than they are in XSL, and some things simply can’t be done in pure XSL. A solution is to call PHP functions directly from within your XSL.

1) In your XSL stylesheet, declare the php namespace and exclude it from the result:

   <xsl:stylesheet version="1.0"
       xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
       xmlns:php="http://php.net/xsl"
       exclude-result-prefixes="php">

2) To call the PHP function and access the result, use:

<!-- for string use this -->
<xsl:value-of select="php:functionString('phpFunctionName', /xpath)"/>

<!-- for DOM Nodes use this -->
<xsl:copy-of select="php:function('phpFunctionName', /xpath)"/>

You can pass as many parameters as you want to either php:function or php:functionString – the latter merely converts the output to a string; otherwise they are identical.

3) You must register PHP functions with the XSL transformer:

$xslDoc = new DOMDocument();
$xslDoc->load('stylesheet.xsl');          // load your XSL stylesheet

$processor = new XSLTProcessor();
$processor->importStylesheet($xslDoc);
$processor->registerPHPFunctions();       // This is the important call for this functionality

4) In your PHP function, access parameters passed in as strings as if they were ordinary PHP strings. If you pass a DOM structure as a parameter then you need to access it along the lines of:

$elements = $DomList[0]->getElementsByTagNameNS('namespace', 'element-name');

Here $DomList will include the root element of the XPath expression used to call the PHP function.
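Putting that together, a complete callback might look something like the sketch below; the function name and the RSS 1.0 namespace are only illustrative, so substitute whatever your stylesheet actually passes in:

<?php
// Called from XSL with something like:
//   <xsl:value-of select="php:functionString('listItemTitles', /rdf:RDF/rss:item)"/>
function listItemTitles($DomList)
{
    // $DomList is an array of DOMElement objects for the node-set passed in.
    $titles = array();
    foreach ($DomList as $node) {
        foreach ($node->getElementsByTagNameNS('http://purl.org/rss/1.0/', 'title') as $title) {
            $titles[] = $title->nodeValue;
        }
    }
    return implode(', ', $titles);
}
?>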

If you want to dump what you pass to PHP as a string you need to do:

// The original function name is lost; C14N() is one way to serialise a node to a string
// (note: this function isn't yet documented in the PHP manual!)
function dumpNode($DomList)
{
    return $DomList[0]->C14N();
}

It’s a very useful feature … good luck with it.

Greeks vs Romans. Adaptive vs Plan-Driven development.

Came across this wonderful essay over at Hacknot today. The essay starts off by decrying this assertion made by Raghavendra Rao Loka in the February 2007 edition of IEEE Software:

“Writing and maintaining software are not engineering activities. So it’s not clear why we call software development software engineering.”

The author of the essay goes on to offer a brief rebuttal of this based on some comments by Steve McConnell, and points out, quite rightly in my opinion, that:

Software development is slowly and surely moving its way out of the mire of superstition and belief into the realm of empiricism and reason. The transition closely parallels those already made in other disciplines, such as medicine’s evolution from witchcraft into medical science.

What I found really insightful though was how the author likened these two views ( Loka vs McConnell ) to another conflict:

These two represent the age-old conflict between the engineers and the artists, the sciences and the humanities. In the software development domain, some have previously characterized it as the battle between the Greeks and the Romans.

He then applies this same metaphor to the wider issue of adaptive vs plan-driven approaches to software development, and in doing so offers an interesting perspective on the two schools of thought:

By now you will probably have recognized the analogue between the cultural divide separating the Greeks and Romans and the methodological divide between adaptive and plan-driven approaches to software development.

We can think of the Greeks as representative of Agile Methods. The focus is upon loosely coordinated individuals whose talent and passion combine to produce great artefacts of creativity. Any organizational shortcomings the team might experience are overcome by the cleverness and skilful adaptivity of the contributors.

Alternatively, we can consider the Romans as representative of plan driven methods, in which the carefully engineered and executed efforts of competent and well educated practitioners combine to produce works of great size and complexity. The shortcomings of any of the individuals involved are overcome by the systematic methods and peer review structure within which they work.

It’s a wonderful analogy that I think illustrates some of the differences between the two approaches in a way that most practitioners would readily understand and perhaps even agree with. Which is the right approach? I guess it depends on you, your project and, most importantly, the kind of people you have in your team. I have certainly spent a considerable amount of time extolling what I believe are the virtues of the agile methodology on this blog.

I spent several years working in a very Roman-esque organisation, and I’d probably argue that the competent and well educated practitioners in such organisations rarely fit either of those terms, probably because the very nature of such systems requires those working within them to accept a certain level of conformity. Consequently individual flair, creativity or even imagination are less important than the uniformity that such organisations require – a sort of fill-in-the-blanks approach to development where you’re responsible for only those small pieces assigned to you, and whether you know what the bigger picture is, or even understand it, isn’t important because someone more senior in the team who assigns you your tasks does know. This often results in developers feeling disaffected, or less likely to feel any sense of personal ownership or even responsibility, because they know they aren’t in a position to be responsible for anything.

I do find myself agreeing that part of what makes agile teams successful is this notion of heroics on the part of individuals. It does work well when you have talented individuals who can work together on a reasonably small scale. How well does that scale up? I can’t personally answer that question. I’ve read plenty of accounts and listened to the likes of Scott Ambler explain how agile can work on large scale projects. I must admit I’m not sure I’m convinced of this … but that’s only because of my earlier point that this decision needs, in part, to be based on the type of people you have in your team.

Our development group at Talis is quite small, at the moment no more than 20 people – and it’s not for a lack of trying; we’re constantly trying to recruit, but we look specifically for skilled developers who could fit into the culture that we have: individuals who are self-motivated, self-learners who take a great deal of personal pride in what they do, take responsibility, and also want to have a sense of ownership over what they are building. As a result we tend to be quite jealous of who we let into the team, but again that’s probably something that is unique to our little group and says more about us than the Agile process.

I do find myself partly agreeing with the conclusion the author of the essay makes:

I favour a development method that is predominantly Greek, with sufficient Roman thrown in to keep things under control and prevent needless wheel spinning. The Greeks are a great bunch of guys, but they tend to put too much emphasis on individual heroics, and pay too little attention to the needs of the maintenance programmers that will follow in their path. The Romans are a little stuffy and formal, but they really know how to keep things under control and keep your project running smoothly and predictably.

I don’t agree that, in terms of software development, the Roman approach was ever truly able to keep things under control or necessarily running smoothly, even predictably. If it were, we wouldn’t have so many examples of failed projects that went over budget, overran their schedules and totally failed to deliver what the customer actually wanted.

I personally do prefer a predominantly Greek approach, and I see methodologies such as Scrum imposing the right level of Roman-esque control and structure to keep the project moving along smoothly and predictably, with the added bonus that the customer actually gets what they want.

JavaScript can provide a richer user experience without compromising Accessibility.

When Rob and I originally put Cenote together, one of the constraints we had to work within was that the application must work without relying on JavaScript. This is primarily because relying heavily on JavaScript can introduce accessibility issues, which include:

  • Making navigation difficult: sites that rely on JavaScript can make it very difficult to navigate around using a keyboard or other assistive technologies, and impossible if JavaScript is disabled in the browser.
  • Hiding content: sites that rely heavily on JavaScript can present content and functionality to the user that is not accessible to assistive technologies, such as text readers.
  • Removing user control: users have no control over automated content changes. For users who have no accessibility needs this isn’t a problem, but for users who rely on assistive technology there is no way of knowing that content has changed. This can prove to be very confusing or disorienting, since it can also involve altering normal browser functionality and triggering events that the user is not aware of.

This doesn’t necessarily mean that pages that contain JavaScript cannot be fully accessible. Such pages can be accessible if the functionality is device independent (in other words, it does not require only a mouse or only a keyboard to work) and the content is always available to assistive technologies.

So based on this knowledge, Rob and I made the conscious decision to develop Cenote first without using any JavaScript at all. This would ensure that the content rendered on the pages was always available to assistive technologies. We agreed that we could then enhance the user experience by using JavaScript to manipulate the DOM of the page in the browser.

To see this in action visit this page in Cenote. If you scroll to the bottom of the page you’ll notice a list of libraries that you can navigate through using a paging control (pictured). You’ll also notice that the book jacket in the top left hand corner of the page has a nice reflection effect. Both these effects are made possible through JavaScript, and provide a slightly richer user experience. To see what I mean, try turning off JavaScript in your browser1, and hit refresh. The page now displays the book jacket without the reflection and the complete list of libraries without the paging control.

So how is all this achieved? Well, let’s take a closer look at the paging control. The snippet of HTML below shows a simplified2 version of the markup that actually represents the items in the list3. Take particular note of the empty div after the unordered list: this is the placeholder we inject the paging control into:

<ul id="Libraries">
  <li><a class="item-library" href="#">Aberdeenshire Libraries</a></li>
  <li><a class="item-library" href="#">Barnsley MD Library</a></li>
  <li><a class="item-library" href="#">Blackburn</a></li>
</ul>
<div id="pagingcontrol"></div>

Now when you view the page without JavaScript enabled, the browser renders this markup exactly as it is. Accessibility and assistive technology only use the HTML markup on the page; in the case of text readers this HTML is parsed and the software reads the content on the page back to the user. When JavaScript is enabled the markup on the page does not change, it stays exactly the same. We use JavaScript to manipulate the browser’s DOM, which is the model the browser generates internally to represent the page. In the example above the unordered list is identified using the id “Libraries”, which makes it easy to find in the DOM, as this snippet of code illustrates:

var libraries = [];
var librariesNode = document.getElementById('Libraries');
var listItems = librariesNode.getElementsByTagName('LI');
// copy the contents of each list item into the libraries array
for (var i = 0; i < listItems.length; i++) {
    libraries[i] = listItems[i].innerHTML;
}

Here we use its identifier to find the unordered list we want to augment with our paging control. Each list element in the unordered list is what we refer to as a holding; we simply extract the contents of each of those nodes and temporarily store them in an array called libraries. In the case of the page I pointed you to earlier this array would contain 50 items.

Once we have found and copied the original complete list of libraries we can proceed with updating the DOM to display only the first 10 libraries to the user, and render a paging navigation control. The function below updates the DOM by removing the original contents of the Libraries node and replacing them with 10 items indexed from the specified startPosition.

// Reconstructed sketch; the function name is approximate.
function printPage(startPosition) {
    var markup = '<UL>';
    for (var x = startPosition; x < startPosition + 10; x++) {
        if (x < libraries.length) {
            markup += '<LI>' + libraries[x] + '</LI>';
        } else {
            markup += '<LI> </LI>';
        }
    }
    markup += '</UL>';
    document.getElementById("Libraries").innerHTML = markup;
    renderNavigationControls();
}

The final line in the function above calls a method to render the navigation controls, which I’ve copied below. You’ll notice that we find the placeholder identified by “pagingcontrol”, and render the previous, next and individual page links ( renderPageNumbers() ) into it.

// Reconstructed sketch; the paging state variables are approximate.
function renderNavigationControls() {
    var control = document.getElementById('pagingcontrol');
    var markup = '';
    if (currentPage > 0) {
        markup += '<a href="javascript:printPreviousPage();" alt="Previous Page" title="Previous Page">< previous</a>';
    } else {
        markup += '< previous';
    }
    markup += '  ' + renderPageNumbers() + '  ';
    if (currentPage < lastPage) {
        markup += '<a href="javascript:printNextPage();" alt="Next Page" title="Next Page">next ></a>';
    } else {
        markup += 'next >';
    }
    control.innerHTML = markup;
}

To reiterate, because this technique manipulates the DOM, the original markup isn’t changed at all. Project Cenote was recently open sourced, so you can download the entire source base, which includes the JavaScript file, pagingcontrol.js, from which I took the extracts above, get a feel for how it works and experiment with it.

Now the above example might not seem all that impressive, it’s just a little paging control, right? Well, it’s a simple example that illustrates a technique which can be very powerful. Consider the following images, which show a calendar view created by one of my colleagues, Andrew. It uses an unordered list to represent a series of events, and that’s what you see when JavaScript is disabled:


However, with JavaScript enabled the DOM of the page is rewritten and, combined with some CSS, it looks like this:

Now that’s more impressive, right? I hope this serves to illustrate how you can use JavaScript to provide richer user experiences without compromising accessibility. It’s an area I’ve been thinking about a lot recently, and there are wider issues around how to develop better JavaScript libraries that understand how to expose dynamic content that needs to be accessible, and whether this needs to be considered a best practice that more developers should adopt. These issues are beginning to have wider implications, as my friend Alan put it earlier this week:

it will address issues of how traditional HCI changes in the face of web2.0: when people are doing mash-ups (so there are more ‘artisan’ interface constructors), or end users configuring their own spaces (as in MySpace), or how you build ‘good practice’ into javascript libraries like prototype or yahoo! YUI

… when he contacted Justin and me earlier this week and invited us, along with others from Talis, to attend this year’s 2007 British HCI Conference taking place at Lancaster University. How we develop best practices for building truly accessible JavaScript libraries is one of the issues I’ll be raising with the panel when I attend the conference.

  1. In Firefox you can achieve this by going to Tools -> Options -> Content and unchecking the box labelled JavaScript. However, if you don’t already use it, I recommend installing the Web Developer extension for Firefox. [back]
  2. I’ve removed the url, the title attribute and the icon image from each link to simplify the example [back]
  3. I’m using Dean’s Syntax Highlighter, but specifying html means that Snap Shots tries to execute against it, so I’ve specified XML instead [back]

Google Gears – Building Offline Web Applications

Google has released Google Gears, a new technology that is designed to overcome the single major drawback all web-based applications suffer from: they don’t work without an internet connection! Google Gears is an Open Source framework, essentially a browser extension, that provides JavaScript APIs enabling data storage, application caching and multi-threading for offline browsing and application use.

I don’t find it at all surprising that Google have invested heavily in trying to find a solution to the problem of how we can use online web-based applications offline. Although they launched Google Apps last year, the take-up has been quite slow. I read a few months back that the Commonwealth Bank had suspended a trial of Google Apps which it was thinking of rolling out to its 50,000-strong workforce, and many analysts insist that one of the major reasons for this is that there is no offline availability of these applications. Or as Carl Sjogreen, Google Senior Product Manager, sums the problem up when he says:

As more and more people are depending on web applications to manage their lives and get information about what’s going on, it becomes an increasing problem when you can’t access those applications when you’re offline.

Enter Google Gears! This new technology certainly strengthens Google’s position in going after Microsoft’s lucrative Office franchise, which makes commercial sense. More importantly though, the technology actually makes the web and browsers a more attractive platform for building applications that can be used anywhere, anytime, regardless of whether you have a connection to the internet or not.

Or as Eric Schmidt, CEO of Google, put it:

With Google Gears, we’re tackling the key limitation of the browser in order to make it a stronger platform for deploying all types of applications and enabling a better user experience

Their decision to open source Gears is also quite an important one. By making the technology Open Source from a relatively early stage, Google are inviting others to help improve the technology and build a community around it, and to move towards developing, with others, an industry standard for these hybrid programmes that work both online and offline.

This is quite an exciting development.

Google Launches "Street View" and "Mapplets"

Google have revamped their 2D maps with a new feature they are calling Street View. When viewing maps of certain cities around the world you’ll get a street-side view of the area you’re currently in, and it isn’t static! You can interact with the image to move along the street; it even allows you to change your angle and move in a new direction. Google have developed this new technology with Immersive Media, and all I can say is, it’s very, very impressive.

If you want to try it out, here’s a map of San Francisco that has street views, and here’s a map of Las Vegas. I really recommend trying it!

Google have also launched their new Mapplets service. Mapplets are a special kind of XML/JavaScript-based Google Gadget that you can add directly to Google Maps. Here’s a link to a special preview page where you can try out a handful of Mapplets. The official blurb from Google on this new feature is copied below:

Mapplets enables third party developers to create mini applications that can be displayed on Google Maps, much like Google Gadgets are displayed on iGoogle. These Mapplets contain a variety of information, from housing listings to crime data, and tools like distance measurement. Users can select from a wide range of Google and third party Mapplets to display on the Map, essentially creating their own “mashup of mashups” directly on the Google Maps site, while still enjoying the built-in functionality of Google Maps, such as local search and driving directions. A number of our partners, including WeatherBug, Booking.com and Platial have already created Mapplets.

Semantic web starting points

I’ve been getting to grips with SPARQL over the last week, probably more so than at any time over the last year. SPARQL is an RDF query language, and one of the nice things about the Talis Platform is that you don’t have to know SPARQL in order to retrieve data. So up until recently I really didn’t have much need to write SPARQL queries; I could pretty much do what I wanted with the Platform using its other RESTful services.

The Platform, however, has extensive support for SPARQL, so whilst working on a spike this week it became apparent that some of the things I wanted to do would require writing some queries. I used it as an opportunity to brush up on my RDF and SPARQL – it’s amazing how easily we can take things for granted when the tools and services we use on a day-to-day basis shield us from low-level implementation details.
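As a rough illustration of what writing and running a query involves (the endpoint URL below is just a placeholder, not a real Platform store), a simple SELECT can be sent to a SPARQL endpoint over HTTP from PHP like this:

<?php
// Placeholder endpoint - substitute the SPARQL service of the store you are querying.
$endpoint = 'http://example.com/store/services/sparql';
$query = 'SELECT ?title WHERE { ?item <http://purl.org/dc/elements/1.1/title> ?title . } LIMIT 10';

$ch = curl_init($endpoint . '?query=' . urlencode($query));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/sparql-results+xml'));
$results = curl_exec($ch);
curl_close($ch);

echo $results;   // the SPARQL query results XML
?>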

If you’re anything like me, when you want to brush up on something you tend to scour bookshelves, or google for bits of information, which can be time consuming and sometimes a bit hit and miss. Fortunately Danny came to my rescue. He joined Talis recently and I think he’s a fantastic addition to the team. Danny sent me a link to his Semantic Web Starting Points, a collection of links he’s compiled that provide a great introduction to semantic web technologies, which obviously includes RDF and SPARQL.

Thanks Danny!

Spikes, PHP and a Platform that just works

I’ve had a pretty good week. I’ve been totally engrossed in a project I’ve been working on since getting back from Xtech last week. Essentially I’ve been working on a spike with Andrew and Hardeep to extend the functionality in our Project Cenote concept car.

The purpose of the spike was twofold: firstly, to try to better understand how to build a funky new set of features into Cenote, and secondly, to allow members of the team to become familiar and experiment with some technologies they hadn’t used before.

In fact it felt quite good leaving work today, having gotten to the point where the little prototype is pretty much feature complete. What’s really impressed me isn’t necessarily what it does (which is cool!), but the speed with which we’ve been able to put it all together. The spike was timeboxed to two weeks, but the reality is that the bulk of the implementation has actually been completed within the last few days. It’s by no means a production system; it’s just enough to hopefully facilitate some of the discussions we hope to have both internally and externally … much like the first release of Cenote, which we open sourced recently.

The original version of Cenote was a read only application that allowed users to search for books and then mashed up the results with data held in various stores in the Talis Platform as well as external sources such as Amazon in order to provide users with some pretty useful information. This spike extends the original version by allowing users to use that data to create and share some really useful things.

I think there are some important reasons why we have been able to put this together so quickly. The technology stack has been kept very simple – it’s just an application built in PHP5, running under Apache 2. Furthermore, the application is built upon our Talis Platform, which is constantly evolving and becoming more and more powerful. I’m not saying that just as someone who has worked on building that platform; I’m saying it as someone who has been using and consuming its services, primarily to build applications with it.

When Rob and I originally wrote Cenote, we were both impressed at how easily we were able to use the platform, as it was then, to put together a cool looking application in the space of a couple of days. If that impressed me at the time, then I’m doubly impressed at how simple it’s been to create an application that supports creation, deletion and updating of data.

Google Tech Talk: Away with applications: The death of the desktop

The computer desktop metaphor is ubiquitous, but how much work do we get done there? None! … all time is entirely wasted navigating or shuffling content to the application in which we can finally work. What lessons can we learn from designing interfaces without the desktop and without applications? Is it even possible? And how does this apply to the Web? Currently, Web applications are often more usable than their desktop-based counterparts because each one does one thing and does it well.

Aza Raskin gives this excellent talk, which is really about human-computer interaction and usability. For those who don’t know, Aza is the son of Jef Raskin, the guy who started the Macintosh project at Apple.

Aza offers some very useful views on user interface design; he touches on GOMS models, cognetics and habituation in a wonderfully easy to follow manner. In this talk he outlines how we can get rid of the application-centric model, which comes from the desktop design paradigm, in order to free up functionality that can be made accessible using a ZUI along with a universal method for accessing functionality.

Applications are like walled cities that hoard their functionality, but we need to give that functionality away so others can use it wherever they are. To facilitate this, Aza argues that we need a universal access interface. Web services give you a separation between the UI and the data, but until now services have really only been available to developers; they’re not really intended for end users. Can we expose them through CLIs? He proposes a synthesis between GUIs and CLIs, and from what he says they’re having great success; some of the examples he shows are compelling. I for one can see the value of this. In fact we already put it into practice about six months ago.

You see, this was something Rob and I thought about when we developed Project Cenote; one of the features of the user interface is that the browser’s URL line is an interface in its own right. For example, if you type this into the URL line:

http://cenote.talis.com/author/gemmell

the application will perform a search for all items that were authored by “gemmell”. So if you’re like me and you just want to get to the content you’re interested in, you can use this as opposed to navigating around the site and entering search terms into a search box. It is basically a Command Line Interface, and I think this is a wonderful way of giving end users access to content without necessarily forcing them to always use a GUI.
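To make that concrete, here is a minimal sketch (not the actual Cenote code; the search function is hypothetical) of how a URL such as /author/gemmell might be mapped onto a search in PHP:

<?php
// Break the request path into segments, e.g. /author/gemmell -> array('author', 'gemmell')
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$segments = explode('/', trim($path, '/'));

if (count($segments) == 2 && $segments[0] == 'author') {
    // performAuthorSearch() is a hypothetical stand-in for whatever
    // search the application actually runs behind the scenes.
    $results = performAuthorSearch(urldecode($segments[1]));
} else {
    // fall back to the normal search form
}
?>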

I was amused when Aza paraphrased Asimov’s Three Laws of Robotics into Raskin’s Rules of Interfaces:
1. An interface shall not harm your content or, through inaction, allow your content to come to harm.
2. An interface shall not waste your time or require you to do more work than is strictly necessary.
3. An interface shall not allow itself to get into a state where it cannot manipulate content.

This is a great talk to listen to and full of some very useful tips.