Xtech 2007 : Arrived in Paris

My fellow Talisians and I flew into Paris earlier this evening and managed to find our way, eventually, to the Novotel Paris Tour Eiffel hotel. After we checked in and freshened up we all met in reception and wandered off towards the Eiffel Tower in search of somewhere to eat, eventually finding a lovely little restaurant where we spent the better part of the evening. It was a really pleasant evening; we spent most of it laughing and joking, but we also discussed what we wanted to achieve whilst we are all here.

The walk back to the hotel was really pleasant. In fact I remember an old friend once saying that the true soul of a city is revealed at night, and I did sense that was true of Paris – or perhaps that’s just the romantic in me.

The conference doesn’t officially start until tomorrow but we are all pretty excited to be here. There are going to be some very interesting talks on tomorrow, and I suspect I’ll probably camp in Amphitheatre C for the talks on the Ubiquitous Web. I’m not sure if the others have decided which talks they are going to sit in on, but I have no doubt we’ll all be meeting many of the attendees here this year.

I’ll echo some comments made by my colleague, and our newly promoted CTO, Ian Davis:

“So here’s the pitch from the new CTO: if you want to face the challenge of creating something world-changing; if you like the idea of a company that controls its own destiny and is small enough for you to change it; if you never want to stop learning; then I want to hear from you.”

“Come and find me at XTech this week, or grab one of the other 8 Talisians that are going and get the lowdown on what it’s really like to work here. If you can’t make XTech, then email careers@talis.com and tell them I sent you :)”

Google Tech Talk: Away with applications: The death of the desktop

The computer desktop metaphor is ubiquitous, but how much work do we get done there? None! … all time is entirely wasted navigating or shuffling content to the application in which we can finally work. What lessons can we learn from designing interfaces without the desktop and without applications? Is it even possible? And how does this apply to the Web? Currently, Web applications are often more usable than their desktop-based counterparts because each one does one thing and does it well.

Aza Raskin gives this excellent talk, which is really about human-computer interaction and usability. For those who don’t know, Aza is the son of Jef Raskin, who started the Macintosh project at Apple.

Aza offers some very useful views on user interface design; he touches on GOMS models, cognetics, and habituation in a wonderfully easy to follow manner. In this talk he outlines how we can get rid of the application-centric model, which comes from the desktop design paradigm, in order to free functionality so it can be made accessible using a ZUI (zoomable user interface) along with a universal method for accessing functionality.

Applications are like walled cities that hoard their functionality, but we need to give that functionality away so others can use it wherever they are. To facilitate this, Aza argues, we need a universal access interface. Web services give you a separation between the UI and the data, but until now services have only really been available to developers; they’re not really intended for end users. Could we expose them through CLIs? He proposes a synthesis between GUIs and CLIs, and from what he says they’re having great success with it; some of the examples he shows are compelling. I for one can see the value of this. In fact we already put it into practise about six months ago.

You see, this was something Rob and I thought about when we developed Project Cenote: one of the features of the user interface is that the browser’s URL line is an interface in its own right. For example, if you type this into the URL line:

http://cenote.talis.com/author/gemmell

the application will perform a search for all items that were authored by “gemmell”. So if you’re like me and you just want to get to the content you’re interested in, you can use this instead of navigating around the site and entering search terms into a search box. It is basically a command line interface, and I think this is a wonderful way of giving end users access to content without necessarily forcing them to always use a GUI.
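As a rough sketch (this is not the actual Cenote code; the function and field names are purely illustrative), the idea of treating the URL path as a command line boils down to parsing the path segments into a query:

```javascript
// Illustrative sketch: interpret a URL path such as "/author/gemmell" as a
// two-part command – a field to search on, and a term to search for.
function parseSearchPath(path) {
  const segments = path.split("/").filter(Boolean);
  if (segments.length !== 2) {
    return null; // not a recognised "command" – fall back to the normal UI
  }
  const [field, term] = segments;
  return { field, term: decodeURIComponent(term) };
}

const query = parseSearchPath("/author/gemmell");
console.log(query); // { field: 'author', term: 'gemmell' }
```

The application would then hand the parsed `{ field, term }` pair to the same search machinery the GUI's search box uses, which is what makes the URL line a first-class interface rather than a special case.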

I was amused when Aza paraphrased Asimov’s Three Laws of Robotics into Raskin’s Rules of Interfaces:
1. An interface shall not harm your content or, through inaction, allow your content to come to harm.
2. An interface shall not waste your time or require you to do more work than is strictly necessary.
3. An interface shall not allow itself to get into a state where it cannot manipulate content.

This is a great talk to listen to and full of some very useful tips.

Open Access and an example of how it can work in education

I’ve been thinking a lot about Open Access, Open Content and indeed Open Data for a while; they are all interrelated issues that we’re thinking about a lot at Talis. It’s true to say that the Open Data issue is probably the one we are focusing on primarily at the moment. In fact one of my colleagues, Paul, is giving a talk [1] on that exact subject at XTech in a couple of weeks, and another of my colleagues, Rob, presented his thoughts [2] on Open Data at EUSIDIC last month, and they’ll both be sitting on a panel discussing Open Data at WWW2007 next week in Banff.

Right now though I want to talk about Open Access and a little on Open Content.

Knowledge should be free and open to use and re-use – that’s something I believe.

There has always been a desire amongst academics – in fact it’s more of a tradition – to publish their research in journals without payment, for the sake of inquiry and of sharing that knowledge. Is it altruism alone that motivates these authors, these researchers? I like to believe that it is the main reason 🙂 . However I recognise that Open Access offers these individuals tangible benefits and advantages [3]. For one thing, studies have shown [4] that openly accessible articles and papers are more likely to be cited than those which are locked away behind subscriptions – accessible only to those either willing to pay for that privilege or belonging to a closed community able to gain access to them.

Open Access makes sense because openly accessible articles can be harvested and indexed by search engines and can be viewed by anyone, anywhere. If you’re researching a subject and come across a text you want to read, there isn’t a barrier preventing you from gaining access to that item.

Back in 1995 Stevan Harnad wrote a seminal piece entitled the “Subversive Proposal” [5], which called upon authors of esoteric writings to archive them for free online, in anonymous FTP archives or on websites. His belief was that as soon as all research authors publicly self-archived their refereed and unrefereed papers online, the research literature would be free and accessible to all. There was great debate around this proposal, and at the time the commonly held view was that what Harnad was asking for was naive and flawed. I managed to find an excellent retrospective piece by Richard Poynder that discusses the impact of the Subversive Proposal [6], and the history that led up to it.

Over a decade later the Open Access movement has gained a great deal of momentum and is now threatening the entire scholarly publishing industry; there are numerous Open Access inspired toolkits and services that enable authors to self-archive content which is then freely available to all. Yet critics of Open Access still maintain that the pay-for-access model is necessary … but I guess when you consider that the scholarly publishing business is worth an estimated $6 billion, it’s not hard to understand why they are so opposed to this.

I felt compelled to share my thoughts today after watching a TED Talk by Richard Baraniuk [7], in which he passionately argues that the textbooks and educational materials used in schools should be made available to all through a vast interconnected repository – allowing anyone to use the information and improve it, not only bringing the authors, who are often academics, closer to those using their material but also encouraging more people to share their knowledge in this new ecosystem. It’s not hard to see how you could abstract this out further to encompass all scholarly articles and not just textbooks. I guess this is where Open Access and Open Content become a little blurred for me, but that’s only because what Richard is proposing is not just allowing people free access to these works but empowering users to mix content together, to create customised works made up of different constituent parts whilst crediting the authors of each of those parts – and that’s really interesting!

Richard is the founder of Connexions, an environment for collaboratively developing, freely sharing, and rapidly publishing scholarly content on the Web under the Creative Commons [8] licence. I think it’s a wonderful example of how Open Access and Open Content can be successful. Connexions is focused entirely on developing teaching materials, and whilst this is only a small subset of all scholarly publishing it’s still an extremely compelling and inspiring initiative which is gaining pace. Add to this the notion of on-demand publishing, where students who want an up-to-date physical copy of a book can purchase one for a significantly lower price than they would have paid had the title been produced by a traditional publishing company, since the middle man is effectively cut out of the loop.

When you consider that most academics who write textbooks don’t actually make a significant amount of money from them, it’s understandable why they might wish to participate in initiatives like Connexions. Most of these individuals don’t write textbooks for the money but to make an impact, and this type of system makes their work accessible to more people, thus increasing the potential impact.

Or is my naivety showing?

  1. Opening the Silos: sustainable models for open data
  2. The outlook and the Future
  3. Online or Invisible?
  4. The effect of open access and downloads on citation impact
  5. Subversive Proposal
  6. Ten Years After by Richard Poynder
  7. Goodbye Textbooks; hello, open source learning
  8. Creative Commons

End of sprint, SCRUM and why I’m feeling so good

We’ve just reached the end of our eighth sprint on the project I’ve been working on. On Monday we’ll be doing our end-of-sprint demonstrations to customers, as well as internally to the rest of the company, and I have to say I’m feeling quite good about it. It’s a lovely day today and I feel like I need to chill (or, as Rob suggested, maybe I need to get a life 😉 ). Anyway, I’ve been sitting here reflecting on this month and there are a few things I want to talk about.

I’m fairly new to the SCRUM methodology, in fact this is the first project I’ve worked on that formally uses it. Our development group here at Talis has adopted the SCRUM process across all of our current projects with what I feel has been a great deal of success.

For me personally the transition from traditional waterfall approaches to agile methodologies has been a revelation in many ways. Before joining Talis I’d spent a number of years developing software based on traditional waterfall methodologies. What was frustrating with these traditional approaches was that you’d spend months capturing and documenting requirements, you’d then spend a while analysing these requirements and then designing your software product, before implementing it and then testing it. Any changes to the requirements invariably meant going through a process of change impact analysis and then going through the whole process again (for the changed requirements), which naturally increases the cost to the customer.

A side effect of this, from the customer’s perspective, was that changing requirements during the project was a bad idea because of the extra cost it would incur. A consequence was that customers would often take delivery of systems which, after a couple of years of development, didn’t actually satisfy the requirements they now had. These same customers would then have to take out a maintenance contract with the vendor to get the systems updated to satisfy their new requirements.

From a developer’s point of view I often found this very demoralising: you knew you were building something the customer didn’t really want, but the software house you work for wants to do that because it has a signed-off requirements document and a contract that guarantees it more money if the customer changes their mind. I often found that when we reached the end of a project, the delivery of that software to the customer was a very nervous and stressful time. The customer at this point has not necessarily had any visibility of the product, so there’s usually a couple of months when your organisation is trying to get them to accept it – which invariably led to arguments over the interpretation of requirements, and sometimes scary-looking meetings between lawyers from both sides.

There was always something very wrong with it.

Since joining Talis, and transitioning to agile methodologies I can finally see why it was so wrong, and why agile, and in this case SCRUM, work so well.

For one thing, I’m not nervous about the end-of-sprint demonstrations. 🙂 The customers have been involved all along; they’ve been using the system, constantly providing feedback, constantly letting us know what we’re doing well and what we need to improve on.

Our sprints have been four weeks long. At the beginning of each sprint we agree which stories we are going to implement, based on what the customers have asked us for; these can be newly identified stories, or stories from the backlog. The customers have an idea from previous sprints what our velocity is – in other words they, and we, know how much work we can get done in a sprint – so when we pick the stories for the sprint we ensure we don’t exceed that limit. This keeps things realistic. Any story that doesn’t get scheduled into the sprint, because it was deemed less of a priority than the stories that were selected, gets added to the backlog.
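The selection rule above can be sketched in a few lines of JavaScript. This is purely illustrative, not our actual planning tooling; the story names and point values are made up, and real planning involves more negotiation than a simple fill-to-capacity loop:

```javascript
// Illustrative sprint planning: walk the customer-prioritised story list,
// commit stories until the team's known velocity is reached, and defer the
// rest to the backlog for re-prioritisation next sprint.
function planSprint(stories, velocity) {
  const sprint = [];
  const backlog = [];
  let committed = 0;
  for (const story of stories) {
    if (committed + story.points <= velocity) {
      sprint.push(story);
      committed += story.points;
    } else {
      backlog.push(story); // deferred – the customer re-prioritises it later
    }
  }
  return { sprint, backlog, committed };
}

const plan = planSprint(
  [
    { name: "search by author", points: 5 },
    { name: "export citations", points: 8 },
    { name: "tweak css", points: 2 },
  ],
  10
);
// "export citations" (8 points) won't fit alongside the first story,
// so it goes to the backlog; the 2-point story still fits.
```

The point of the exercise is the constraint itself: because the velocity cap is known to everyone, the customer is choosing between stories rather than asking for everything at once.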

This iterative cycle is great! For one thing, customers are encouraged to change their minds, to change their requirements, because they then have to decide whether that change is a higher priority than the other items on the backlog. They are empowered to choose what means the most to them, and that’s what we give them! The customer doesn’t feel like the enemy but like an integral part of the team, and for me that’s vital.

As a developer it feels great to know that your customers like your product … and why shouldn’t they, they’ve been involved every step of the way, they’ve been using it every step of the way.

I’ve only been here at Talis for ten months, and in that time I’ve had to constantly re-examine and re-evaluate not only what I think it means to be a good software developer but pretty much every facet of the process of building services and products for customers. For me it’s been an amazing journey of discovery and I’m pretty certain it’s going to continue to be for a very long time.

The really wonderful thing though is that in our development group I’m surrounded by a team of people who believe passionately that it’s the journey and how we deal with it that defines us, and not the destination. So we are constantly looking for ways to improve, that in itself can be inspiring.
So yeah … I feel good!

Our development group is always looking for talented developers who share our ethos and could fit into the culture we have here. If you’d like to be a part of this journey then get in touch with us. Some of us, including myself, will be at XTech 2007 next month, so if you’re attending the conference come and have a chat with us.

Microsoft is dead.

I’ve read pretty much all of Paul Graham’s essays. I think he’s a wonderful writer and in the past we have often found ourselves debating his views at our bi-weekly geek bookclub @ Talis. One of his most seminal pieces was Hackers and Painters – which every developer should read. So, as you might imagine, I was more than a little intrigued this morning when my FeedReader listed a new essay by Paul with the contentious title: Microsoft is Dead.

Paul argues that Microsoft is no longer frightening, that the company is no longer seen as a threat, no longer casts the shadow it once did over the entire software industry:

I can sense that. No one is even afraid of Microsoft anymore. They still make a lot of money—so does IBM, for that matter. But they’re not dangerous.

Paul attributes this demise, if that’s the right word for it, to four things:

Firstly, Google, who Paul believes is the most dangerous company now by far, in both the good and bad senses of the word. He uses www.live.com as an example of how Microsoft is limping behind Google, continuously playing catch-up.

Secondly, the release of Gmail and the introduction of AJAX to the masses. Gmail showed how much you could do with web-based software, signalling the death knell of the desktop as more and more applications are delivered over the web. Paul describes how Microsoft themselves might have contributed to the rise of AJAX, something I was previously unaware of:

Ironically, Microsoft unintentionally helped create Ajax. The x in Ajax is from the XMLHttpRequest object, which lets the browser communicate with the server in the background while displaying a page. (Originally the only way to communicate with the server was to ask for a new page.) XMLHttpRequest was created by Microsoft in the late 90s because they needed it for Outlook. What they didn’t realize was that it would be useful to a lot of other people too—in fact, to anyone who wanted to make web apps work like desktop ones.
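For anyone who hasn’t seen it, the background request Paul describes looks roughly like this in JavaScript. This is a generic, browser-only sketch (the function name and callback shape are illustrative, not from any particular product):

```javascript
// Classic XMLHttpRequest pattern: ask the server for data in the background
// while the current page stays on screen, then update part of the page when
// the response arrives – no full page reload needed.
function fetchInBackground(url, onDone) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous: the page stays responsive
  xhr.onreadystatechange = function () {
    // readyState 4 = done; status 200 = the request succeeded
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.send(null);
}
```

This is the whole trick behind Gmail-style interfaces: the "page" becomes a long-lived application that talks to the server piecemeal, which is exactly why it started to feel like desktop software.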

The third cause is the widespread availability of high-speed broadband Internet access, which is key to facilitating the delivery of web-based applications to end users – which in turn is key to moving users away from their reliance on desktop-based tools and applications.

The final nail in the coffin, Paul argues, came from Apple. The company re-defined itself and offered the world a viable alternative to Windows in OS X. I know from personal experience that although I don’t have a Mac or a PC running native Linux, I do much of my development work in Linux VMs – this is partly due to the infrastructure policy on our company laptops … which, I should hasten to add, just changed 🙂 So when I get my shiny new laptop I can run Linux on it natively. Yippee! (I’d love it even more if I could have one of those 17″ MacBooks … pretty please, Ian; if you do I’ll buy you one of these t-shirts!)

I think to a great extent Paul is actually right, but I personally wouldn’t count Microsoft out of the running. Microsoft is still a company that is capable of innovating great things. Just take a look at what’s coming out of their research labs, in terms of PhotoSynth and DeepFish, to understand that they are looking to push the envelope in certain areas. Unfortunately it does appear that these days they are reacting to innovations made by their competitors – www.live.com and maps.live.com are great examples of two Microsoft products that are essentially late alternatives to Google Web Search and Google Earth. Instead of leading the way, Microsoft is being forced to change its traditional business because others, like Google, are changing the industry around it.

I think Paul is absolutely right when he attributes part of their downfall, as it were, to their complacency as a monopoly:

I’m glad Microsoft is dead. They were like Nero or Commodus—evil in the way only inherited power can make you. Because remember, the Microsoft monopoly didn’t begin with Microsoft. They got it from IBM. The software business was overhung by a monopoly from about the mid-1950s to about 2005. For practically its whole existence, that is. One of the reasons “Web 2.0” has such an air of euphoria about it is the feeling, conscious or not, that this era of monopoly may finally be over.

It’s been a well-known fact for years that Microsoft is propped up by the profit it generates from two product lines: Microsoft Windows and Microsoft Office. Hell, even the Xbox is still making a loss for the company. It’s no accident, then, that Google have released browser-based office applications, in the form of docs.google.com. This service allows people to write and share documents and spreadsheets for free, combined with Gmail and several other apps. Google are now positioning themselves as a service provider, offering enhanced versions of this service for small businesses at $50 per user, per year. This will no doubt force Microsoft to change – perhaps even releasing a browser-based SOA version of its Office suite. It’s going to be interesting to see how the competition between these two giants pans out. I’m not saying that being first in this kind of race is always best, but if you get into a position where you’re forcing both the industry and your competition to react to you, then that’s got to be a good thing.

Take the time to read Paul’s essay. It’s not very long, but it does make some excellent points.

Firefox Extensions/Add-Ons

A few people have asked me what Firefox add-ons I use, particularly to aid development work. Here’s a list of the Firefox extensions/add-ons I have installed – not all of them are development oriented but I find them all very useful.

  • Download Statusbar
    Allows you to view and manage downloads from a tidy statusbar, without that pesky download window getting in the way of your browsing.
  • Fangs
    This extension creates a textual representation of a web page similar to how the page would be read by a modern screen reader. I find this useful for doing some cursory validation of accessibility.
  • Firebug
    Firebug integrates with Firefox to put a wealth of development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page.
  • FireFTP
    FireFTP is a free, secure, cross-platform FTP client for Mozilla Firefox which provides easy and intuitive access to FTP servers. Along with transferring your files quickly and efficiently, FireFTP also includes more advanced features such as: directory comparison, syncing directories while navigating, SSL encryption, file hashing, and much more!
  • GreaseMonkey
    Allows you to customize the way a webpage displays using small bits of JavaScript. It’s also a cool way of adding functionality to web pages based on some criteria. To get an idea of what you can do with Greasemonkey, have a look at this little demonstration by one of my colleagues, Richard Wallis, who creates a Greasemonkey script that uses our platform APIs to add a Libraries panel to Amazon pages.
  • IE Tab
    This is an awesome add-on. It basically embeds Internet Explorer into Mozilla/Firefox tabs. This is a great tool for web developers, since you can easily see how your webpage displays in IE with just one click and then switch back to Firefox.
  • JSView
    All browsers include a “View Source” option, but none of them offer the ability to view the source code of external files. Most websites store their JavaScript and style sheets in external files and then link to them within a web page’s source code. Previously, if you wanted to view the source of an external JavaScript/CSS file you would have to manually look through the source code to find the URL and then type that into your browser. With this nifty little plugin you can download/view all the externally referenced resources.
  • Live HTTP Headers
    This allows you to view the HTTP headers of a page while browsing. Comes in very useful when you want to check or validate the response headers that your application is sending out.
  • Quick Locale Switcher
    Allows you to quickly change and apply a different locale from the Tools menu. It’s very useful if you’re doing anything with multilingual support.
  • RDF Viewer
    We’re doing a lot of work building applications that use RDF. This tool can be quite useful when inspecting RDF Documents.
  • View Formatted Source
    When you use the normal view page source option in Firefox, it renders the source exactly as the server output it. The problem with this is that pages often contain snippets of code generated by different components, and the source is actually very difficult to read through. This little add-on formats the source and provides syntax highlighting and some neat code folding, as well as optional CSS support.
  • Web Developer
    A must-have add-on that allows you to perform standards-based validation on pages, tinker with browser settings, and much more – the list is far too long to mention. If you’re doing any form of web development this should be a requisite!
  • XPather
    Quite a feature-rich XPath generator, editor, inspector and extractor. I sometimes struggle with figuring out the XPath to a particular element or node in a file, and this makes it very easy! It has a few quirks though.
  • Yellow Pipe Lynx Viewer
    RightLynx displays a Lynx view of a web page via right-click or the Tools menu. With just one click, and without leaving the page (RightLynx opens in a new, small window), you can preview the page you are on in a Lynx viewer. You can see what a page will look like when viewed with Lynx, a text-mode web browser, which is also presumably how search engines see your site. In addition, it can help determine whether web pages are accessible to the vision impaired.

Google Tech Talk: Mashups – Combining Web Applications to Make Desktop Productivity Tools

An interesting tech talk by Mark Birbeck, CEO of X-Port. Some of my colleagues saw Mark talk at the mashup* event in London last month. Although I have reservations about the Sidewinder framework he proposes, we at Talis are doing more and more work to make APIs available to developers that can be called from desktop-based applications like the widgets and gadgets Mark describes. The problem with Sidewinder, though, is that it provides a wrapper around web-based applications allowing you to run them on your desktop – and I’m struggling to see the value in that.

I’ve done a fair bit of work creating Yahoo (Konfabulator) desktop widgets that use our APIs, which will be released in the next few months as exemplars of how developers out there can mash up our APIs with other services to create interesting and even compelling new applications. We like mashups at Talis; in fact we held a very successful mashup competition last year, which we are running again this year, so if you’re interested why not enter the competition?

Project Cenote

One of the projects I worked on at Talis before Christmas was our Project Cenote exemplar.


Put simply Cenote allows you to search our platform for information on books and stuff. It’s a bit of a mashup since it seamlessly integrates our data and content with data and content from Amazon, and some other partners.

Cenote was created to serve as an example of how simple it is to create applications using the new Talis Platform. Rob and I worked on it for a few days, although we spent a lot of that time developing the rather unique look it has. The last thing we wanted was for it to look like a traditional, staid, boring OPAC. Amazing what you can do with a little CSS and some imagination, huh?

Looks aside, what was really cool about Cenote was that Rob and I were able to build the application in very little time. We paired up on it, which meant we sat at one desk and pretty much coded it from scratch together over a couple of days or so. It’s actually a pretty thin skin built upon the platform.

Cenote is basically a small PHP5 application sitting inside Apache 2. The application makes web service calls to the Platform which returns data in RSS format. The application then uses a couple of XSLT stylesheets to transform the data into the UI you see.
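To give a feel for the shape of that pipeline, here’s a heavily simplified sketch in JavaScript rather than PHP5, with a tiny regex standing in for the real XSLT stylesheets and the web service call omitted. Purely illustrative – this is not the Cenote source:

```javascript
// Illustrative stand-in for Cenote's transform step: take an RSS payload
// (as returned by the Platform) and render each <item> as an HTML list entry.
// The real application does this with XSLT stylesheets, not a regex.
function rssItemsToHtml(rss) {
  const items = [];
  const itemRe =
    /<item>[\s\S]*?<title>(.*?)<\/title>[\s\S]*?<link>(.*?)<\/link>[\s\S]*?<\/item>/g;
  let m;
  while ((m = itemRe.exec(rss)) !== null) {
    items.push(`<li><a href="${m[2]}">${m[1]}</a></li>`);
  }
  return `<ul>${items.join("")}</ul>`;
}

const rss =
  "<rss><channel>" +
  "<item><title>Some Book</title><link>http://example.org/1</link></item>" +
  "</channel></rss>";
console.log(rssItemsToHtml(rss));
// <ul><li><a href="http://example.org/1">Some Book</a></li></ul>
```

The appeal of the real XSLT version is the same separation this sketch hints at: the data format the Platform returns never changes, so restyling the UI only means swapping stylesheets.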

You can find out more about Cenote over at the Talis Developer Network. We’ve decided to open source it so that developers can see how simple it is to build upon our platform services; it serves as one of several examples already published, with many more on the way. I’ve written an article on the TDN that explains briefly how to install the sources and get it running locally. So have a play 🙂

Talis Snowman …

Had to post this up: at lunch time some of us had a bit of a snowball fight whilst others decided to build a snowman. I should also point out that Karen in HR did give us a quick talk about health and safety 😉 … I have to admit I had a lot of fun over lunch, not just because it was hilarious watching Chris try to throw a snowball, but because seeing everyone turning into big kids was kind of nice …

Oh, I should point out that any resemblance to real persons, living or dead, is purely coincidental …