Semantic web and other ramblings with some fellow Talisians over a curry …

Had a wonderful evening tonight. Ian invited us all out for a curry with Danny Ayers and his lovely wife Caroline. Unfortunately, due to the short notice, fewer of us were able to attend than I suspect Ian had hoped for: in fact it was only Danny, Caroline, Ian, Amanda and myself.

Danny and Caroline are both wonderful people – both are self-confessed geeks and each has a diverse range of interests. I’ve been looking forward to catching up with Danny; the last time we chatted was at the Talis Summer Ball, but that evening was full of frolics and not really the forum for any meaningful conversations about the future of the semantic web, the flexibility of RDF, or FOAF and its value.

Danny’s a tinkerer – he likes to play with things, to experiment with ideas and create things, which means he looks for new ways of thinking about old problems, and in doing so I think he comes up with equally novel ideas on how to solve them. I really like that. Some of his ideas are fascinating, some just scary, and others are simply beyond my grasp (but I’m not worried about that, as Danny said I’m still just a bloody youngster!). Of course all this could just mean that he’s a total nutter … but guess what? If he is … then he’s come to the right place! 😉

I think I learnt a lot this evening about Danny, and a great deal from talking to him. He certainly has that infectious enthusiasm we’ve come to kind of expect from everyone in our team. But to top it all off he’s a great guy and I’m really looking forward to working more closely with him and hopefully learning a lot more from him now that he’s part of our team at Talis.

It’s been a great evening and I had a wonderful time! 🙂

Continuous Integration with PHP

I’m in the process of setting up a continuous integration environment for a new PHP project I’m starting. On our previous project, which was Java based, we used the following tools to support a test-driven approach and automate build generation:

  • Cruise Control – For setting up a continuous build process.
  • Ant – A Java-based build tool.
  • JUnit – A Java-based xUnit testing framework.
  • PMD – A tool that checks for common coding standards violations.
  • Cobertura – A Java-based code coverage tool that calculates the percentage of code exercised by unit tests.

I’ve managed to set up an analogous process for our PHP project using the following tools:

  • Cruise Control – For setting up a continuous build process.
  • Ant – A Java-based build tool.
  • PHPUnit – An xUnit testing framework for PHP.
  • PHP_CodeSniffer – A PHP tool that checks for common coding standards violations.
  • Xdebug – A debugging tool which, when combined with PHPUnit, can provide code coverage metrics.

It seems to work quite well; here’s the relatively simple Ant build script that controls it all.

  <?xml version="1.0" encoding="UTF-8"?>
  <project default="all" name="DummyProject" basedir=".">
      <target name="all" depends="clean, init, test, sniff" />

      <target name="clean">
          <delete dir="doc/CodeCoverage" />
          <delete dir="doc/UnitTestReport" />
      </target>

      <target name="init">
          <mkdir dir="doc/CodeCoverage" />
          <mkdir dir="doc/UnitTestReport" />
      </target>

      <target name="test" description="Run PHPUnit tests">
          <exec dir="./" executable="TestRunner.bat" failonerror="true" />
      </target>

      <target name="sniff" description="">
          <exec dir="./" executable="Sniffer.bat" failonerror="true" />
      </target>
  </project>

I’m currently running this on a Windows machine, although it’s trivial to change it to work in a *ix based environment, which I’ll probably configure in the next day or so. I had a couple of problems installing PHP_CodeSniffer, but that was only because I hadn’t installed PEAR properly. If you have any problems installing PHP_CodeSniffer under Windows then follow these instructions:
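Porting to *ix mostly means swapping the batch files for shell scripts. As a rough sketch (TestRunner.sh is a hypothetical name for the shell port of TestRunner.bat, not something from the actual build), the test target would become something like:

```xml
<target name="test" description="Run PHPUnit tests">
    <!-- the shell script wraps the phpunit invocation, mirroring TestRunner.bat -->
    <exec dir="./" executable="./TestRunner.sh" failonerror="true" />
</target>
```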

To install PEAR under Windows do the following, which assumes you have PHP 5.2.x installed in c:\php:

  cd c:\php
  go-pear.bat

The interactive installer presents you with some options; if you follow the defaults you should be fine.
Once PEAR has installed you can install PHP_CodeSniffer like this:

  cd c:\php
  pear install PHP_CodeSniffer-beta

This will download the PHP_CodeSniffer package and install it into your php/PEAR folder.

Once this is done you can check whether it has installed by calling phpcs with the -h flag, which will produce the following:

C:\php>phpcs -h
Usage: phpcs [-nlvi] [--report=<report>] [--standard=<standard>]
    [--generator=<generator>] [--extensions=<extensions>] <file> ...
        -n            Do not print warnings
        -l            Local directory only, no recursion
        -v[v][v]      Print verbose output
        -i            Show a list of installed coding standards
        --help        Print this help message
        --version     Print version information
        <file>        One or more files and/or directories to check
        <extensions>  A comma separated list of file extensions to check
                      (only valid if checking a directory)
        <standard>    The name of the coding standard to use
        <generator>   The name of a doc generator to use
                      (forces doc generation instead of checking)
        <report>      Print either the "full" or "summary" report

Now try it out …

C:\php>phpcs C:\Projects\dummyproject\test\

FILE: C:\Projects\dummyproject\test\AllTests.php
--------------------------------------------------------------------------------
FOUND 7 ERROR(S) AND 0 WARNING(S) AFFECTING 7 LINE(S)
--------------------------------------------------------------------------------
  1 | ERROR | End of line character is invalid; expected "\n" but found "\r\n"
  2 | ERROR | Missing file doc comment
  3 | ERROR | Constants must be uppercase; expected 'PHPUNIT_MAIN_METHOD' but
    |       | found 'PHPUnit_MAIN_METHOD'
 12 | ERROR | Missing class doc comment
 14 | ERROR | Missing function doc comment
 19 | ERROR | Missing function doc comment
 30 | ERROR | Constants must be uppercase; expected PHPUNIT_MAIN_METHOD but
    |       | found PHPUnit_MAIN_METHOD
--------------------------------------------------------------------------------

Check the documentation for what the various command line switches do; you can also generate summary reports on an entire directory tree:

C:\php>phpcs --report=summary --standard=Squiz C:\Projects\dummyproject\test\

PHP CODE SNIFFER REPORT SUMMARY
--------------------------------------------------------------------------------
FILE                                                            ERRORS  WARNINGS
--------------------------------------------------------------------------------
C:\Projects\dummyproject\test\AllTests.php                      24      0
C:\Projects\dummyproject\test\NodeTest.php                      10      0
C:\Projects\dummyproject\test\NodeTypeTest.php                  11      0
--------------------------------------------------------------------------------
A TOTAL OF 45 ERROR(S) AND 0 WARNING(S) WERE FOUND IN 3 FILE(S)
--------------------------------------------------------------------------------

Overall I’m quite happy with this setup, which for the most part was pretty straightforward. I have no doubt it will evolve over time, but I think it’s a good foundation to build on.

Greeks vs Romans. Adaptive vs Plan-Driven development.

Came across this wonderful essay over at Hacknot today. The essay starts off by decrying this assertion made by Raghavendra Rao Loka in the February 2007 edition of IEEE Software:

“Writing and maintaining software are not engineering activities. So it’s not clear why we call software development software engineering.”

The author of the essay goes on to offer a brief rebuttal of this, based on some comments by Steve McConnell, and points out, quite rightly in my opinion, that:

Software development is slowly and surely moving its way out of the mire of superstition and belief into the realm of empiricism and reason. The transition closely parallels those already made in other disciplines, such as medicine’s evolution from witchcraft into medical science.

What I found really insightful though was how the author likened these two views ( Loka vs McConnell ) to another conflict:

These two represent the age-old conflict between the engineers and the artists, the sciences and the humanities. In the software development domain, some have previously characterized it as the battle between the Greeks and the Romans.

He then applies this same metaphor to the wider issue of adaptive vs plan-driven approaches to software development, and in doing so offers an interesting perspective on the two schools of thought:

By now you will probably have recognized the analogue between the cultural divide separating the Greeks and Romans and the methodological divide between adaptive and plan-driven approaches to software development.

We can think of the Greeks as representative of Agile Methods. The focus is upon loosely coordinated individuals whose talent and passion combine to produce great artefacts of creativity. Any organizational shortcomings the team might experience are overcome by the cleverness and skilful adaptivity of the contributors.

Alternatively, we can consider the Romans as representative of plan driven methods, in which the carefully engineered and executed efforts of competent and well educated practitioners combine to produce works of great size and complexity. The shortcomings of any of the individuals involved are overcome by the systematic methods and peer review structure within which they work.

It’s a wonderful analogy that I think illustrates some of the differences between the two approaches in a way that most practitioners would readily understand and perhaps even agree with. Which is the right approach? I guess it depends on you, your project, but most importantly the kind of people you have in your team. I have certainly spent a considerable amount of time extolling what I believe are the virtues of the agile methodology on this blog.

I spent several years working in a very Roman-esque organisation, and I’d probably argue that the “competent and well educated practitioners” in such organisations rarely fit either of those terms, probably because the very nature of such systems requires those working within them to accept a certain level of conformity. Consequently individual flair, creativity, or even imagination matter less than the uniformity such organisations require – a sort of fill-in-the-blanks approach to development where you’re responsible only for the small pieces assigned to you. Whether you know or understand the bigger picture isn’t important, because someone more senior, who assigns you your tasks, does. This often leaves developers feeling disaffected, with little sense of personal ownership or even responsibility, because they know they aren’t in a position to be responsible for anything.

I do find myself agreeing that part of what makes agile teams successful is this notion of heroics on the part of individuals. It works well when you have talented individuals who can work together at a reasonably small scale. How well does that scale up? I can’t personally answer that question. I’ve read plenty of accounts and listened to the likes of Scott Ambler explain how agile can work on large-scale projects, and I must admit I’m not sure I’m convinced … but that’s only because of my earlier point that this decision needs, in part, to be based on the type of people you have in your team.

Our development group at Talis is quite small – at the moment no more than 20 people – and it’s not for lack of trying; we’re constantly trying to recruit, but we look specifically for skilled developers who can fit into the culture that we have: individuals who are self-motivated, self-learners, who take a great deal of personal pride in what they do, take responsibility, and want a sense of ownership over what they are building. As a result we tend to be quite jealous of who we let into the team, but again that’s probably something unique to our little group and says more about us than about the Agile process.

I do find myself partly agreeing with the conclusion the author of the essay makes:

I favour a development method that is predominantly Greek, with sufficient Roman thrown in to keep things under control and prevent needless wheel spinning. The Greeks are a great bunch of guys, but they tend to put too much emphasis on individual heroics, and pay too little attention to the needs of the maintenance programmers that will follow in their path. The Romans are a little stuffy and formal, but they really know how to keep things under control and keep your project running smoothly and predictably.

I don’t agree that, in terms of software development, the Roman approach was ever truly able to keep things under control or necessarily running smoothly, let alone predictably. If it were, we wouldn’t have so many examples of failed projects that ran over budget, ran over time, and totally failed to deliver what the customer actually wanted.

I personally do prefer a predominantly Greek approach and I do see methodologies such as SCRUM imposing the right level of Roman-esque control and structure to keep the project moving along smoothly and predictably, with the added bonus that the customer actually gets what they want.

Talis Summer Party

Had a wonderful evening last night at the Talis Summer Ball. The theme was a seventies Caribbean disco and there were some amazing costumes. I wish I had taken my camera with me; fortunately plenty of others did, and I’ll try to link through to some of their pictures when they are available.

It was a really wonderful evening!

Twelve months at Talis

So much has been lost, so much forgotten. So much pain, so much blood. And for what, I wonder. The past tempts us, the present confuses us, and the future frightens us. And our lives slip away, moment by moment, lost in that vast terrible inbetween. But there is still time to seize that one last fragile moment. To choose something better, to make a difference ... and I intend to do just that.

Yesterday was an anniversary of sorts: it marked the end of my first twelve months here at Talis. I was chatting to Ian on the train home last night and, as always, he asked me how I’m getting on, and I’m happy to say that I’m still loving everything I’m doing. I’ve learnt more this last twelve months than in the five years I spent at my previous job. It’s a fantastic environment and it’s been incredible working alongside a group of extremely talented geeks. I feel like I have grown a lot this past twelve months, and I have not only learnt new things but learnt new ways of thinking about problems.

When I joined Talis I was looking for a new beginning, I guess there was a part of me that needed to start over and that needed more than anything else to forget the past. I certainly found that new beginning here and it’s helped me to find my direction again and a sense of purpose … it’s helped me make some wonderful new friends along the way … but most of all I finally feel like I’m actually making a difference.

JavaScript can provide a richer user experience without compromising Accessibility.

When Rob and I originally put Cenote together, one of the constraints we had to work within was that the application must work without relying on JavaScript. This is primarily because relying heavily on JavaScript can introduce accessibility issues, including:

  • Difficult navigation: Sites that rely on JavaScript can be very difficult to navigate using a keyboard or other assistive technologies, and impossible if JavaScript is disabled in the browser.
  • Hidden content: Sites that rely heavily on JavaScript can present content and functionality that is not accessible to assistive technologies, such as text readers.
  • Loss of user control: Users have no control over automated content changes. For users with no accessibility needs this isn’t a problem, but users who rely on assistive technology have no way of knowing that content has changed. This can be very confusing and disorienting, since it can also involve altering normal browser functionality and triggering events the user is not aware of.

This doesn’t necessarily mean that pages that contain JavaScript cannot be fully accessible. Such pages can be accessible if the functionality is device independent (in other words, it does not require only a mouse or only a keyboard to work) and the content is always available to assistive technologies.

So based on this knowledge Rob and I made the conscious decision to develop Cenote first without using any JavaScript at all. This would ensure that the content rendered on the pages was always available to assistive technologies. We agreed that we could then use JavaScript to enhance the user experience by manipulating the DOM of the page in the browser.

To see this in action visit this page in Cenote. If you scroll to the bottom of the page you’ll notice a list of libraries that you can navigate through using a paging control (pictured). You’ll also notice that the book jacket in the top left-hand corner of the page has a nice reflection effect. Both these effects are made possible through JavaScript, and provide a slightly richer user experience. To see what I mean, try turning off JavaScript in your browser1 and hit refresh. The page now displays the book jacket without the reflection and the complete list of libraries without the paging control.

So how is all this achieved? Well, let’s take a closer look at the paging control. The snippet of HTML below shows a simplified2 version of the markup that actually represents the items in the list3. Take particular note of the empty div after the unordered list: this is the placeholder we inject the paging control into:

  <ul id="Libraries">
    <li><a class="item-library" href="#">Aberdeenshire Libraries</a></li>
    <li><a class="item-library" href="#">Barnsley MD Library</a></li>
    <li><a class="item-library" href="#">Blackburn</a></li>
  </ul>
  <div id="pagingcontrol"></div>

Now when you view the page without JavaScript enabled, the browser renders this markup exactly as it is. Assistive technologies only use the HTML markup on the page; in the case of text readers, this HTML is parsed and the software reads the content back to the user. When JavaScript is enabled the markup on the page does not change – it stays exactly the same. We use JavaScript to manipulate the browser’s DOM, which is the model the browser generates internally to represent the page. In the example above the unordered list is identified by the id “Libraries”, which makes it easy to find in the DOM, as this snippet of code illustrates:

  var libraries = new Array();
  var items = document.getElementById('Libraries').getElementsByTagName('LI');
  for (var i = 0; i < items.length; i++) {
      libraries[i] = items[i].innerHTML; // copy each holding out of the list
  }

Here we use its identifier to find the unordered list we want to augment with our paging control. Each list element in the unordered list is what we refer to as a holding; we simply extract the contents of each of those nodes and temporarily store them in an array called libraries. In the case of the page I pointed you to earlier, this array would contain 50 items.

Once we have found and copied the original complete list of libraries, we can proceed with updating the DOM to display only the first 10 libraries to the user, and render a paging navigation control. The function below updates the DOM by removing the original contents of the Libraries node and replacing it with 10 items indexed from the specified startPosition.

  function printPage(startPosition) {
      var html = '<UL>';
      for (var x = startPosition; x < startPosition + 10; x++) {
          // pad the final page with empty items so the list keeps its height
          html += (x < libraries.length) ? '<LI>' + libraries[x] + '</LI>' : '<LI> </LI>';
      }
      document.getElementById("Libraries").innerHTML = html + '</UL>';
      renderNavigationControls();
  }

The final line in the function above calls a method to render the navigation controls, which I’ve copied below. You’ll notice that we find the placeholder identified by “pagingcontrol”, and render the previous, next and individual page links (renderPageNumbers()) into it.

  function renderNavigationControls() {
      var previous = '<a href="javascript:printPreviousPage();" title="Previous Page">< previous</a>';
      var next = '<a href="javascript:printNextPage();" title="Next Page">next ></a>';
      document.getElementById('pagingcontrol').innerHTML = previous + '  ' + renderPageNumbers() + '  ' + next;
  }

To reiterate: because this technique manipulates the DOM, the original markup isn’t changed at all. Project Cenote was recently open sourced, so you can download the entire source base, which includes the JavaScript file, pagingcontrol.js, from which I took the extracts above, get a feel for how it works, and experiment with it.
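The heart of the paging technique is just array slicing, quite independent of any DOM work. As a standalone sketch (the function name and the padding behaviour are my own, not taken from pagingcontrol.js):

```javascript
// Return the items belonging to one page, padding the final page with empty
// strings so the rendered list keeps a constant height.
function pageOf(items, pageNumber, pageSize) {
    var page = items.slice(pageNumber * pageSize, (pageNumber + 1) * pageSize);
    while (page.length < pageSize) {
        page.push('');
    }
    return page;
}

// 50 libraries, 10 per page: page 4 holds items 40..49.
var libraries = [];
for (var i = 1; i <= 50; i++) {
    libraries.push('Library ' + i);
}
console.log(pageOf(libraries, 4, 10)[0]); // 'Library 41'
```

Wiring this into the page is then just a matter of writing the returned items back into the list node’s innerHTML, exactly as pagingcontrol.js does.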

Now the above example might not seem all that impressive – it’s just a little paging control, right? Well, it’s a simple example that illustrates a technique which can be very powerful. Consider the following images, which show a calendar view created by one of my colleagues, Andrew. It uses an unordered list to represent a series of events, and that’s what you see when JavaScript is disabled:


However with JavaScript enabled the DOM of the page is rewritten and combined with some CSS it looks like this:

Now that’s more impressive, right? I hope this serves to illustrate how you can use JavaScript to provide richer user experiences without compromising accessibility. It’s an area I’ve been thinking about a lot recently, and there are wider issues around how to develop better JavaScript libraries that understand how to expose dynamic content that needs to be accessible, and whether this needs to be considered a best practice that more developers should adopt. These issues are beginning to have wider implications, as my friend Alan put it earlier this week:

it will address issues of how traditional HCI changes in the face of web2.0: when people are doing mash-ups (so there are more ‘artisan’ interface constructors), or end users configuring their own spaces (as in MySpace), or how you build ‘good practice’ into javascript libraries like Prototype or Yahoo! YUI

.. when he contacted Justin and me earlier this week and invited us, along with others from Talis, to attend this year’s 2007 British HCI Conference taking place at Lancaster University. How we develop best practices for building truly accessible JavaScript libraries is one of the issues I’ll be raising with the panel when I attend the conference.

  1. In Firefox you can achieve this by going to Tools -> Options -> Content and unchecking the box labelled JavaScript. However, if you don’t already use it, I recommend installing the Web Developer extension for Firefox [back]
  2. I’ve removed the url and the title attribute and the icon image from each link to simplify the example [back]
  3. I’m using Dean’s Syntax Highlighter but specifying html means that the Snap Shots tries to execute against it, so I’ve specified XML instead [back]

Semantic web starting points

I’ve been getting to grips with SPARQL over the last week, probably more so than at any time over the last year. SPARQL is an RDF query language, and one of the nice things about the Talis Platform is that you don’t have to know SPARQL in order to retrieve data. So up until recently I really didn’t have much need to write SPARQL queries; I could pretty much do what I wanted with the platform using its other RESTful services.

The platform however has extensive support for SPARQL, so whilst working on a spike this week it was apparent that some of the things I wanted to do would require writing some queries. I used it as an opportunity to brush up on my RDF and SPARQL – it’s amazing how easily we take things for granted when the tools and services we use on a day-to-day basis shield us from low-level implementation details.
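For flavour, here’s the shape of query I mean – a minimal SPARQL SELECT (the Dublin Core predicate and the limit are illustrative, not taken from the actual spike), which finds every resource in a store that has a title:

```sparql
PREFIX dc: <http://purl.org/dc/elements/1.1/>

SELECT ?book ?title
WHERE {
  ?book dc:title ?title .
}
LIMIT 10
```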

If you’re anything like me, when you want to brush up on something you tend to scour bookshelves or Google for bits of information, which can be time-consuming and sometimes a bit hit and miss. Fortunately Danny came to my rescue; he joined Talis recently and I think he’s a fantastic addition to the team. Danny sent me a link to his Semantic Web Starting Points, a collection of links he’s compiled that provide a great introduction to semantic web technologies, which obviously includes RDF and SPARQL.

Thanks Danny!

Spikes, PHP and a Platform that just works

I’ve had a pretty good week. I’ve been totally engrossed in a project I’ve been working on since getting back from Xtech last week. Essentially I’ve been working on a spike with Andrew and Hardeep to extend the functionality in our Project Cenote concept car.

The purpose of the spike was twofold: firstly, to better understand how to build a funky new set of features into Cenote; and secondly, to allow the members of the team to experiment with some technologies they weren’t yet familiar with.

In fact it felt quite good leaving work today, having gotten to the point where the little prototype is pretty much feature complete. What’s really impressed me isn’t necessarily what it does (which is cool!), but the speed with which we’ve been able to put it all together. The spike was timeboxed to two weeks, but the reality is that the bulk of the implementation has actually been completed within the last few days. It’s by no means a production system; it’s just enough to hopefully facilitate some of the discussions we hope to have both internally and externally … much like the first release of Cenote, which we open sourced recently.

The original version of Cenote was a read only application that allowed users to search for books and then mashed up the results with data held in various stores in the Talis Platform as well as external sources such as Amazon in order to provide users with some pretty useful information. This spike extends the original version by allowing users to use that data to create and share some really useful things.

I think there are some important reasons why we have been able to put this together so quickly. The technology stack has been kept very simple – it’s just an application built in PHP5, running under Apache 2. Furthermore, the application is built upon our Talis Platform, which is constantly evolving and becoming more and more powerful. I’m not saying that just as someone who has worked on building that platform; I’m saying it as someone who has been using and consuming its services, primarily to build applications with it.

When Rob and I originally wrote Cenote, we were both impressed at how easily we were able to use the platform, as it was then, to put together a cool looking application in the space of a couple of days. If that impressed me at the time, then I’m doubly impressed at how simple it’s been to create an application that supports creation, deletion and updating of data.

Pictures from Xtech 2007

Just got home from Xtech, was only out there for two days. Must confess I would have loved to stay longer but have tonnes of work to do.

I’ve posted some of my pictures up on flickr here.

But also created the jumpcut movie below using them, enjoy …

I had a really good time in Paris!

Xtech 2007: Physical Hyperlinks

Speaker: Timo Arnall, Oslo School of Architecture and Design.

Some notes I made whilst listening to this talk.

Talk is about connecting the digital to the physical.

Physical:

  • Things
  • places
  • people

Digital

  • Content
  • Applications
  • Services

Currently being driven by mobile phone technology; phones are fast becoming universal controllers.

Context awareness is really hard AI, and raises implications over choice: people like to be able to choose.

Biggest driver of this technology is advertising and marketing – holy grail of advertising.

Taking services and things back to physical products.

Design for universal access. Finding more information about products for people with impairments.

Mimic graffiti by offering new ways of authoring.

Mobile devices increase social awareness but also narrow it down to a screen – which can be a very antisocial activity. Physical hyperlinks can help get us back to interacting in the physical world instead of with these tiny screens.

Everyday objects start to have agency and voice in the digital world.

Technologies that would allow us to achieve this:

  • Bluetooth, used increasingly for marketing and advertising. Posters in London allow users to download advertisements. Problems with Bluetooth: it works at a large range, the interaction field is large and difficult to visualise, and it’s an ideal platform for location spamming. As a user I want to be able to choose what I get, not have it pushed to me.
  • SMS, e.g. Yellow Arrow. Simple to implement, but requires users to read and type codes, relies on the network, and you have to wait for feedback, which will not be immediate.
  • Barcodes. Everything has barcodes; they are truly ubiquitous. Designed to improve the speed and accuracy of data entry, they have become universal standards – so ubiquitous we don’t notice them anymore.

RFID

  • Seen as a replacement for barcodes.
  • RFID doesn’t work when there’s metal or liquid around; currently problematic in practice.
  • Tag per item or per pallet?
  • Short range technology 10cm.
  • Interaction is quick, there is an immediacy.
  • Hold up to 4kb of data.
  • Two way information, get data from tag or write info to it.
  • Often break down when there is more than one tag in the same bubble.
  • Controversial because of privacy issues, but this might be unfounded.

Applications

Timo showed many examples of how barcodes and QR Codes are used in real-world applications. Infrastructure can now be set up using QR Codes; this is an interesting way of delivering settings to devices (e.g. network access).

Urban Seeder uses very complex visual codes and relies on more sophisticated pattern recognition. Lovely in the way that most people would not recognise them unless they were in the know; this might mean people start exploring, taking pictures of patterns to see if they contain a hidden meaning.

Some of Timo’s research has shown that people using these things on the street in Norway find it can feel very strange to interact with smart posters etc., because it’s manifestly apparent to everyone passing by that you are doing so … users complained it felt like picking up litter, because you’re very exposed. RFID, by contrast, is much quicker. Is this local culture (Norway), or could it be a global cultural phenomenon?

Of all the talks I’ve listened to this morning, I must confess I enjoyed Timo’s the most. Find out more about him and his research here:

www.nearfield.org