The Role of Testing and QA in Agile Software Development

In our development group at Talis we’ve been thinking a lot about how to test more effectively in an agile environment. One of my colleagues sent me a link to this excellent talk by Scott Ambler, which examines the Role of Testing and QA in Agile Software Development.

Much of the talk is really an introduction to agile development, which is worth listening to because Scott dispels some of the myths around agile and offers his own views on best practices using some examples. It does get a bit heated around the 45-minute mark when he’s discussing database refactoring; some of the people in the audience were struggling with an idea he was presenting that I felt was fairly simple. If you want to skip all that, jump forward to the 50-minute mark where he starts talking about sandboxes. What I will say is that if you’re having difficulty getting agile accepted in your organisation, this might be a video you want to show your managers, since it covers all the major issues and benefits.

Here are some of the tips he offers with regard to testing and improving quality:

  • Do Test-Driven Development: the unit tests are the detailed design, and they force developers to think about the design. Call it just-in-time design.
  • Use Continuous Integration to build and run unit tests on each check-in to trunk.
  • Acceptance tests are primary artefacts. Don’t bother with a requirements document; simply maintain the acceptance tests, since in reality all a testing team will do is take each requirement and copy it into an acceptance test, so why introduce a traceability issue when you don’t need it? http://www.agilemodeling.com/essays/singleSourceInformation.htm
  • Use Standards and Guidelines to help ensure teams are creating consistent artefacts.
  • Code reviews and inspections are not a best practice. They are used to compensate for people working alone, not sharing their work, not communicating, poor teamwork, and poor collaboration. “Guru checks output” is an anti-pattern. Working together, pairing, good communication, and teamwork should negate the need for code reviews and inspections.
  • A short feedback loop is extremely important. The faster you can get testing results and feedback from stakeholders, the better.
  • Testers need to be flexible, willing to pick up new skills, and able to work with others; they need to be generalising specialists. An emerging belief in agile is that there is no need for traditional testers.
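The test-first point above can be sketched with a minimal, hypothetical example (the `isbn10_check_digit` function and its test are my own illustration, not from the talk): the test is written first and acts as the detailed design, and the implementation follows just in time to satisfy it.

```python
# A minimal, hypothetical illustration of test-first design: the test is
# written before the code and pins down the behaviour (the "detailed design").

def test_isbn10_check_digit():
    # 0-306-40615-2 is a valid ISBN-10, so the check digit must be '2'.
    assert isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]) == '2'
    # 0-8044-2957-X exercises the special 'X' (value ten) check digit.
    assert isbn10_check_digit([0, 8, 0, 4, 4, 2, 9, 5, 7]) == 'X'

# The implementation is then written just-in-time to make the test pass.
def isbn10_check_digit(digits):
    """Return the ISBN-10 check digit for the first nine digits."""
    total = sum((10 - i) * d for i, d in enumerate(digits))
    check = (11 - total % 11) % 11
    return 'X' if check == 10 else str(check)

test_isbn10_check_digit()  # runs green once the implementation exists
```

In a continuous-integration setup, a test like this would run automatically on every check-in to trunk, which is what keeps the feedback loop short.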

Scott is a passionate speaker and very convincing. Some of the points he makes are quite controversial yet hard to ignore, especially his argument that traditional testers are becoming less necessary. I’m not sure I agree with all his views, but he has succeeded in forcing me to challenge my own, which I need to mull over; for that reason alone, watching his talk has been invaluable.

Open Access and an example of how it can work in education

I’ve been thinking a lot about Open Access, Open Content and indeed Open Data for a while; they are all interrelated issues that we’re thinking about a lot at Talis. It’s true to say that the Open Data issue is probably the one we are focusing on primarily at the moment. In fact one of my colleagues, Paul, is giving a talk1 on that exact subject at XTech in a couple of weeks, another of my colleagues, Rob, presented his thoughts2 on Open Data at EUSIDIC last month, and they’ll both be sitting on a panel discussing Open Data at WWW2007 next week in Banff.

Right now though I want to talk about Open Access and a little on Open Content.

Knowledge should be free and open to use and re-use – that’s something I believe.

There has always been a desire amongst academics, in fact more of a tradition, to publish their research in journals without payment, for the sake of inquiry and sharing that knowledge. Is it altruism alone that motivates these authors, these researchers? I like to believe that it is the main reason 🙂 . However, I recognise that Open Access offers these individuals tangible benefits and advantages3. For one thing, studies have shown4 that openly accessible articles and papers are more likely to be cited than those locked away behind subscriptions, accessible only to those either willing to pay for that privilege or belonging to a closed community able to gain access to them.

Open Access should make sense because openly accessible articles can be harvested and indexed by search engines and can be viewed by anyone, anywhere. If you’re researching a subject and come across a text you want to read, there isn’t a barrier preventing you from gaining access to that item.

Back in 1994 Stevan Harnad wrote a seminal piece entitled the “Subversive Proposal”5, which called upon authors of esoteric writings to archive them for free online in anonymous FTP archives or on websites. His belief was that as soon as all research authors publicly self-archived their refereed and unrefereed papers online, the research literature would be free and accessible to all. There was great debate around this proposal, and at the time the commonly held view was that what Harnad was asking for was naive and flawed. I managed to find an excellent retrospective piece by Richard Poynder that discusses the impact of the Subversive Proposal6 and the history that led up to it.

Over a decade later the Open Access movement has gained a great deal of momentum and is now threatening the entire scholarly publishing industry; there are numerous Open Access-inspired toolkits and services enabling authors to self-archive content, which is then freely available to all. Yet critics of Open Access still maintain that the pay-for-access model is necessary … but I guess when you consider that the scholarly publishing business is worth an estimated $6 billion, it’s not hard to understand why they are so opposed to this.

I felt compelled to share my thoughts today after watching a TED Talk by Richard Baraniuk7, in which he passionately argues that textbooks and educational materials used in schools should be made available to all through a vast interconnected repository, allowing anyone to use the information and improve it, not only bringing the authors, who are often academics, closer to those using their material but encouraging more people to share their knowledge in this new ecosystem. It’s not hard to see how you could abstract this out further to encompass all scholarly articles and not just textbooks. I guess this is where Open Access and Open Content become a little blurred for me, but that’s only because what Richard is proposing is not only allowing people free access to these works but empowering users to mix content together to create customised works made up of different constituent parts whilst crediting the authors of each of those parts – and that’s really interesting!

Richard is the founder of Connexions, an environment for collaboratively developing, freely sharing, and rapidly publishing scholarly content on the Web under the Creative Commons8 licence. I think it’s a wonderful example of how Open Access and Open Content can be successful. Connexions is focused entirely on developing teaching materials, and whilst this is only a small subset of all scholarly publishing, it’s still an extremely compelling and inspiring initiative which is gaining pace. Add to this the notion of on-demand publishing, where students who want an up-to-date physical copy of a book can purchase one for significantly less than they would have paid had the title been produced by a traditional publishing company, since the middleman is effectively cut out of the loop.

When you consider that most academics who write textbooks don’t actually make a significant amount from them, it’s understandable why they might wish to participate in initiatives like Connexions. Most of these individuals don’t write textbooks for money but to make an impact, and this type of system makes their work accessible to more people, thus increasing the potential impact.

Or is my naivety showing?

  1. Opening the Silos: sustainable models for open data
  2. The outlook and the Future
  3. Online or Invisible?
  4. The effect of open access and downloads on citation impact
  5. Subversive Proposal
  6. Ten Years After by Richard Poynder
  7. Goodbye Textbooks; hello, open source learning
  8. Creative Commons

Microsoft Silverlight

Although I had heard that Microsoft had announced it was going to release Silverlight, I hadn’t actually had the time to look into it. I got back from the gym this evening and decided I’d find out a bit more about it. I have to confess that at first glance I’m very impressed.

First things first: you can find out more about it at the official product homepage. You can download the Silverlight Community Technology Preview from here. Once it’s installed you’ll need to restart your browser; then, if you go back to the Silverlight product homepage, you can click on the video in the centre of the screen to view it in the embedded Silverlight player.

You can also click on this link to view a demo of the Silverlight Page Turn media feature. This demonstrates how Silverlight uses XAML to create a presentation of images. When the demo loads, hold down your mouse button on the page turn icon and drag the page as though you were actually turning a page in a book. It’s quite nice.

On their own these two demos don’t really give you a huge insight into where MS is headed with this product. Naturally it’s being positioned to compete with Adobe’s Flash. However, to get a feel for just how far Microsoft has come, watch this video! It shows the power of MS Expression Media Encoder and Silverlight working together. The real-time video editing capabilities using hardware graphics acceleration are really impressive. So too is the ability to create and stream media with chapter links, so you can jump to predefined points in the stream, by embedding metadata into the videos.

By far the most impressive feature is called Video Brush, which in the demo is used to overlay a video on a jigsaw. Here’s the impressive thing: you can move the individual pieces of the jigsaw around and the video still plays inside the pieces. Whilst this is eye candy, it could have amazing real-world uses … picture-in-picture over the web, for example!

Learn more about Expression Media Encoder and Silverlight over at Tim Sneath’s blog.

Premier Google Apps Demonstration

Came across this video on the official Google channel on YouTube. It’s a tutorial that demonstrates how organisations can improve their productivity by adopting Google Apps. It shows some of the customisation capabilities Google provides for organisations to brand the apps. There’s a heavy focus on how you can use Gmail for more than just email: instant messaging, calendars, etc. There’s also a great deal of emphasis on the collaboration features in the tools, which are quite impressive. They also demonstrate how easy it is for administrators to configure the tools, branding and permissions. The Premier Edition also allows organisations to switch off adverts.

Google Tech Talk : One Laptop Per Child

The aim of OLPC is to change how kids learn.

Ivan Krstic, Chief Security Architect at OLPC, gives a technical talk on how the laptop was designed and how they are going about building it. He goes to great lengths to explain why they are doing this, the rationale behind the project, and how this influenced many of the technical decisions.

How do you build laptops for kids?

The Original XO-1 laptop has the following spec:

  • Geode GX-500 1.0W, 366MHz, 16KB L1 cache, no L2
  • 128 MB RAM
  • 512 MB NAND Flash

The newer version has the following hardware spec:

  • AMD Geode LX-700 0.8W, 433MHz, 128KB L1, 128KB L2
  • 256 MB RAM
  • 1024 MB NAND flash

The laptop has no moving parts, which helps keep the power usage down. Its peak power consumption is 4-5W, and standard consumption is closer to 1-2W; compare this to a conventional laptop, which is around 40-50W. One of the things that stands out for me in this talk is that the OLPC team are doing what is probably the most aggressive work in power management using Linux anywhere in the world. In order to conserve more power, their goal is to suspend the machine every 2 to 3 seconds if nothing on the screen is changing. The actual target they have set is to be able to suspend and resume the machine at the edge of human perception, which is ~100ms. That’s incredible!

If you set aside the social aspects of this project and focus purely on the technical goals they’re attempting to achieve, the OLPC project could radically change the way laptops are built. It’s well worth watching the talk, there’s a number of other unique advancements the project has made, and I for one will be keeping a close eye on its development.

Google buys DoubleClick for $3.1 Billion

It’s official: Google has acquired online advertising outfit DoubleClick for $3.1 billion. The sky-high price, though, may be less a function of DoubleClick’s actual worth and more about what it can strategically provide for Google, and what it could have done for Microsoft, who were also bidding for the company.

Through this acquisition Google has gained a vibrant advertising business for banners, videos and other so-called display ads intended to promote brands rather than to generate immediate sales. It’s a widely known fact that DoubleClick has relationships with almost every major online publisher and almost half of all online ad agencies. This means that Google can now go head to head with its main search rival Yahoo! in the display advertising business.

To get an idea of why this is so important, analysts predict that the paid search advertising market will account for more than 40% of the $19.5 billion expected to go to on-line advertising this year (Mar. 7 eMarketer report).

David Rosenblatt, CEO of DoubleClick, made an interesting comment about this acquisition: he’s excited at the prospect of using DoubleClick’s relationships and Google’s targeting to sell offline ads in the future. He also believes that DoubleClick’s existing clients won’t see this as a threat, but as a tool that makes advertising easier: “I think they will see this as a best-of-breed combination – the leading platform technology provider and the leading monetization engine”.

Even more power for Google.

Web Apps can never be desktop replacements … ?

Came across this article over at madpenguin.org. The author, Matt Hartley, argues that Web Apps will never be desktop replacements. After reading the article I think the most compelling arguments he provides are:

  1. In order to use a web-based application you have to have an internet connection. Broadband outages mean you can’t be dependent on them; when you’re offline you can’t use them.
  2. There are privacy issues to think about. You’re effectively handing your data to a third party and relying on the fact that they will not abuse it.

Firstly, I think never is a long time 😉

I am not sure either would dissuade me from using web-based applications instead of desktop ones. I already use Google’s web-based applications and I think they are pretty good in terms of meeting my day-to-day needs. If I’m honest, I’m hard pressed to think of what I do with a word processor or spreadsheet on a daily basis that I can’t do using these applications.

As for availability: I can’t remember the last time I suffered an internet outage that prevented me from getting online for any significant amount of time. I certainly can’t remember a time I’ve tried to use one of Google’s applications, or one of 37signals’ applications, and found it down or unavailable.

Speed of response, as I see it, is a big stumbling block for web-based applications. I’ve not experienced many such issues using Google applications, but I know how irritated I get when I’m sitting there waiting for a page to load in Basecamp, for example. However, any organisation worth its salt that is serious about providing software as a service over the web has to consider the responsiveness of its software as a key metric of the application’s success, because its users certainly will.

Data security is a bit of a funny topic. If you consider the prevalence of behaviour-logging spyware on most computers, I’m not convinced the average person’s data would be more secure on their own PC, or even their work machine. I suppose it feels comforting to think that you’re somehow responsible for your own data, but Microsoft is Microsoft, spyware is spyware, rootkits are rootkits and hackers are hackers.

Realistically though, it’s certainly going to be a while before people will actually bring themselves to trust third-party companies with their corporate data. Any form of outsourcing raises questions. Google is making some inroads with its Google Apps premium service, which basically allows companies to have their corporate email provided by Google and to use slightly richer versions of Google’s web-based applications as opposed to Microsoft Office.

Ultimately, I do wonder whether the viability of this transition from desktop to web is less about the technical issues and more about the commercial ones:

Web apps will slowly replace desktop apps so long as desktop apps fail to turn the same profit that web apps and subscription services can. To some extent we can figure in the level to which users acquiesce to the transfer but the fact simply is that there are larger entities than end users calling the shots on this one. It’s like pushing a bill through Congress: if at first they don’t succeed they’ll launch a campaign to poll the public for the conflicting arguments, they’ll pay enough lip service to make people think that the issues have been resolved, and then they’ll resubmit next year. If the major business partners on Wall Street decide that they’re making more money from companies which offer web based applications then, slowly but surely, venture capital will be steered away from desktop application vendors and to world wide web application providers. We, the end users, have no control over this.

Yahoo Widgets 4.0 Released

Yahoo Widgets 4 is now available for download. The new version makes significant performance improvements over earlier releases. They have also finally provided a built-in method of updating widgets, as well as discovering new ones – something I’ve been keenly looking forward to, since I had to implement my own update listener for a set of widgets I wrote last year.

Also new in this version is the Widget Dock, which is very similar to the Google and Vista dock bars, basically allowing you to manage and neatly organise widgets on the desktop. I also like the new Flickr widget bundled with version 4. The widget not only streams photos from Flickr directly to your desktop, it also provides drag-and-drop upload capability so you can put your photos onto Flickr with relative ease.

From a development point of view, they’ve made massive performance improvements: DOM traversal of XML and XPath evaluation seem substantially faster against large XML DOMs, which is something I used to whinge about a lot. You can view a list of all the changes in the version’s release notes.
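To illustrate the kind of XPath-over-a-DOM traversal I mean, here is a small sketch (Yahoo Widgets themselves are scripted in JavaScript; Python’s standard-library ElementTree stands in here, and the feed document is entirely made up):

```python
import xml.etree.ElementTree as ET

# Illustrative only: a small RSS-like document of the kind a widget
# might poll, traversed with an XPath-style query.
FEED = """
<feed>
  <item><title>First</title><views>10</views></item>
  <item><title>Second</title><views>25</views></item>
  <item><title>Third</title><views>5</views></item>
</feed>
"""

root = ET.fromstring(FEED)

# ElementTree supports a limited XPath subset; this selects every
# <title> element beneath any <item>.
titles = [t.text for t in root.findall('./item/title')]
print(titles)  # ['First', 'Second', 'Third']
```

A widget polling a large feed ends up running queries like this constantly, which is why faster traversal over big DOMs matters so much in practice.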

All in all, it’s not bad!