Combining, minimising and distributing JavaScripts

I’ve spent some time recently writing Ant scripts to generate documentation and to combine and minimise multiple JavaScript files into a single download. I thought I’d share what I have, in case others find it useful or can suggest better ways of doing what I’m trying to accomplish.

Combining multiple JS files into a single file

Here’s a simple Ant task that concatenates several files into a single file. version.txt is a file that simply contains a version number, e.g. ‘0.5’.

  <target name="combine">
    <echo message="Concatenating Files" />
    <concat destfile="./dist/uncompressed/mydistribution-${VERSION}.js">
      <fileset dir="." includes="file1.js" />
      <fileset dir="." includes="file2.js" />
      <fileset dir="." includes="file3.js" />
      <fileset dir="." includes="file4.js" />
      <fileset dir="." includes="file5.js" />
    </concat>
  </target>

Minimising using YUI Compressor

You’ll need to download the latest version of the YUI Compressor. All I’ve provided is a simple Ant wrapper around it, and an example of how to use it:

  <property name="LIB_DIR" value="./lib"/>
  <property name="YUI" value="${LIB_DIR}/yui-compressor/yuicompressor-2.4.2.jar" />
  <target name="minimiseJSFile">
    <java jar="${YUI}" fork="true" failonerror="true">
      <arg line="--type js" />
      <arg line="-o ${outputFile}" />
      <arg value="${inputFile}" />
    </java>
  </target>
  <!-- using the above -->
  <target name="minimise">
    <antcall target="minimiseJSFile">
      <param name="inputFile" value="./dist/uncompressed/mydistribution-${VERSION}.js" />
      <param name="outputFile" value="./dist/minimised/mydistribution.min-${VERSION}.js" />
    </antcall>
  </target>

It’s worth noting that by default the YUI Compressor both minimises and obfuscates code. This is because obfuscation also significantly reduces the size of the script, since it substitutes your nice variable names with single-letter ones. If you do not want this behaviour, you can add the ‘--nomunge’ directive as an arg line above.
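To illustrate what munging does, here is a hand-written before/after sketch (this is NOT actual YUI Compressor output, just an approximation of the renaming it performs):

```javascript
// Before: readable names survive plain minification.
function calculateTotal(price, quantity) {
  var total = price * quantity;
  return total;
}

// After munging: parameters and locals become single letters, saving bytes on
// every occurrence. (Whitespace left in for readability; the real tool strips
// it too.)
function calculateTotalMunged(a, b) {
  var c = a * b;
  return c;
}
```

Because only names that are invisible outside the function are rewritten, the munged version behaves identically to the original.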

Generating JS Documentation

For this to work you’ll need to download the latest version of JsDoc Toolkit. In the example below I’m enumerating each file I want documentation generated for; you could just as easily point it at a directory.

  <target name="doc" description="generates documentation for core rdfQuery">
    <!-- the jsdoc-toolkit ant task is currently broken, so we run it directly -->
    <echo message="Generating Documentation:"/>
    <java jar="${JSDOC_TOOLKIT_DIR}/jsrun.jar" fork="true" failonerror="true">
      <arg value="${JSDOC_TOOLKIT_DIR}/app/run.js"/>
      <arg value="-t=${JSDOC_TOOLKIT_DIR}/templates/jsdoc"/>
      <arg value="-d=./dist/documentation/"/>
      <arg value="file1.js"/>
      <arg value="file2.js"/>
      <arg value="file3.js"/>
    </java>
  </target>
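For reference, JsDoc Toolkit builds its output from structured comments in the source files, along these lines (a hypothetical example, not taken from any of the files above):

```javascript
/**
 * Multiplies two values together.
 * @param {Number} value1 the first operand
 * @param {Number} value2 the second operand
 * @returns {Number} the product of the two operands
 */
function multiply(value1, value2) {
  return value1 * value2;
}
```

Any function without such a comment still appears in the generated documentation, but only with the information JsDoc Toolkit can infer from the code itself.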

Packaging a distribution

Here we simply want to create a single, easily downloadable zip file which contains the combined JavaScript, a minimised version of it, and all the API documentation.

  <target name="dist">
    <zip destfile="./dist/mydistribution-${VERSION}.zip">
      <zipfileset dir="./dist/uncompressed/" includes="*.js" prefix="./dist/uncompressed/"/>
      <zipfileset dir="./dist/minimised/" includes="*.js" prefix="./dist/minimised/"/>
      <zipfileset dir="./dist/documentation/" includes="**/**" prefix="./dist/documentation/"/>
    </zip>
  </target>

Putting it all together

Here’s a real example of how you can combine all of the above. I’ve copied the build.xml that I added to the rdfQuery project below:

  <?xml version="1.0"?>
  <project name="rdfquery" basedir="." default="all">
    <loadfile property="VERSION" srcfile="version.txt" description="Version to build">
      <filterchain>
        <striplinebreaks/>
      </filterchain>
    </loadfile>

    <property name="DOCS_DIR" value="./docs" description="API documentation"/>
    <property name="DIST_DIR" value="./dist"/>
    <property name="LIB_DIR" value="./lib"/>
    <property name="JSDOC_TOOLKIT_DIR" value="${LIB_DIR}/jsdoc-toolkit/"/>
    <property name="YUI" value="${LIB_DIR}/yui-compressor/yuicompressor-2.4.2.jar" />

    <!-- Names for output -->
    <property name="JS" value="${DIST_DIR}/js/" />
    <property name="JS_MIN" value="${DIST_DIR}/minimised/" />

    <target name="all" depends="init, doc, dist"/>

    <target name="doc" description="generates documentation for core rdfQuery">
      <!-- the jsdoc-toolkit ant task is currently broken, so we run it directly -->
      <echo message="Generating Documentation:"/>
      <java jar="${JSDOC_TOOLKIT_DIR}/jsrun.jar" fork="true" failonerror="true">
        <arg value="${JSDOC_TOOLKIT_DIR}/app/run.js"/>
        <arg value="-t=${JSDOC_TOOLKIT_DIR}/templates/jsdoc"/>
        <arg value="-d=${DOCS_DIR}"/>
        <arg value="jquery.uri.js"/>
        <arg value="jquery.xmlns.js"/>
        <arg value="jquery.datatype.js"/>
        <arg value="jquery.curie.js"/>
        <arg value="jquery.rdf.js"/>
        <arg value="jquery.rdfa.js"/>
        <arg value="jquery.rules.js"/>
      </java>
    </target>

    <target name="dist">
      <antcall target="combine" />
      <antcall target="minimise" />
      <zip destfile="${DIST_DIR}/jquery.rdfquery-${VERSION}.zip">
        <zipfileset dir="${JS}" includes="*.js" prefix="${JS}"/>
        <zipfileset dir="${JS_MIN}" includes="*.js" prefix="${JS_MIN}"/>
        <zipfileset dir="${DOCS_DIR}" includes="**/**" prefix="${DOCS_DIR}"/>
      </zip>
    </target>

    <target name="combine" description="combines js files into three different files representing the three different packages for distribution">
      <echo message="Building rdfQuery Core Distribution" />
      <concat destfile="${JS}/jquery.rdfquery.core-${VERSION}.js">
        <fileset dir="." includes="jquery.uri.js" />
        <fileset dir="." includes="jquery.xmlns.js" />
        <fileset dir="." includes="jquery.datatype.js" />
        <fileset dir="." includes="jquery.curie.js" />
        <fileset dir="." includes="jquery.rdf.js" />
      </concat>

      <echo message="Building rdfQuery RDFa Distribution" />
      <concat destfile="${JS}/jquery.rdfquery.rdfa-${VERSION}.js">
        <fileset dir="${JS}/" includes="jquery.rdfquery.core-${VERSION}.js" />
        <fileset dir="." includes="jquery.rdfa.js" />
      </concat>

      <echo message="Building rdfQuery Rules Distribution" />
      <concat destfile="${JS}/jquery.rdfquery.rules-${VERSION}.js">
        <fileset dir="${JS}/" includes="jquery.rdfquery.rdfa-${VERSION}.js" />
        <fileset dir="." includes="jquery.rules.js" />
      </concat>
    </target>

    <target name="minimise">
      <echo message="Minimising rdfQuery Core Distribution" />
      <antcall target="minimiseJSFile">
        <param name="inputFile" value="${JS}/jquery.rdfquery.core-${VERSION}.js" />
        <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.core.min-${VERSION}.js" />
      </antcall>

      <echo message="Minimising rdfQuery RDFa Distribution" />
      <antcall target="minimiseJSFile">
        <param name="inputFile" value="${JS}/jquery.rdfquery.rdfa-${VERSION}.js" />
        <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.rdfa.min-${VERSION}.js" />
      </antcall>

      <echo message="Minimising rdfQuery Rules Distribution" />
      <antcall target="minimiseJSFile">
        <param name="inputFile" value="${JS}/jquery.rdfquery.rules-${VERSION}.js" />
        <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.rules.min-${VERSION}.js" />
      </antcall>
    </target>

    <target name="minimiseJSFile">
      <java jar="${YUI}" fork="true" failonerror="true">
        <arg line="--type js" />
        <arg line="-o ${outputFile}" />
        <arg value="${inputFile}" />
      </java>
    </target>

    <target name="clean" description="deletes the distribution and documentation directories">
      <echo message="Deleting distribution and API documentation"/>
      <delete dir="${DIST_DIR}"/>
      <delete dir="${DOCS_DIR}"/>
    </target>

    <target name="init" depends="clean">
      <mkdir dir="${DIST_DIR}" />
      <mkdir dir="${DIST_DIR}/js" />
      <mkdir dir="${DIST_DIR}/minimised" />
      <mkdir dir="${DOCS_DIR}" />
    </target>
  </project>


I hope others find this useful. There are a number of obvious improvements that can be made, but I hope it serves to illustrate the general principles. Let me know what you all think.

Life’s too short – write fast code!


This is the second talk following up on the 14 best practices from YSlow and “High Performance Web Sites”. The first talk presented three new best practices: Split the Initial Payload, Load Scripts Without Blocking, and Don’t Scatter Inline Scripts.

The most important of these is loading external scripts without blocking other downloads and without preventing page rendering. One complication is that this may introduce undefined symbol errors if inlined code uses symbols from the external scripts. Luckily, there are several techniques to work around this problem. That and other topics are covered in this presentation of three more best practices:

* Coupling Asynchronous Scripts
* Use Iframes Sparingly
* Flush the Document Early
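One common way of coupling inline code to an asynchronously loaded script is to queue the dependent code and flush the queue when the script arrives. This is a minimal sketch of that idea (the names are illustrative, not from Steve’s slides; in a real page onLibraryLoad would be wired to the script element’s onload event):

```javascript
// Code that depends on the external library registers itself here instead of
// running immediately, so load order no longer matters.
var pendingCallbacks = [];
var libraryLoaded = false;

function whenLibraryReady(fn) {
  if (libraryLoaded) { fn(); } else { pendingCallbacks.push(fn); }
}

// Called once the external script has loaded: run the queued callbacks in the
// order they were registered, then let future callers run straight away.
function onLibraryLoad() {
  libraryLoaded = true;
  for (var i = 0; i < pendingCallbacks.length; i++) { pendingCallbacks[i](); }
  pendingCallbacks = [];
}
```

The queue preserves registration order, which is the property you lose when scripts download asynchronously.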

Much of this talk discusses material from Steve’s book, High Performance Web Sites: Essential Knowledge for Front-End Engineers. The talk is full of great advice; I found the discussion of loading scripts both synchronously and asynchronously, and the performance gains that can be achieved, particularly useful. However, this has to be combined with the understanding that you also have to couple scripts together in order to preserve the order they are loaded in, and that by default loading external scripts blocks the download of other elements on the page. Steve discusses a number of techniques that can address these issues, along with the pros and cons of each. His discussion of John Resig’s idea of using degrading script tags is extremely useful.

This is a hugely useful tech talk and a must for anyone doing serious JavaScript development.

Drop-in JavaScript Performance

Browsers are continually upgrading – providing new features from the latest specifications. We’ll look at modern JavaScript and DOM techniques that you can easily drop in to your applications for instant speed-ups.

This is a great tech talk by John Resig that covers features in upcoming browsers: the new JavaScript engines in the major browsers, audio and video tags in Firefox 3.1 and Opera 10, full Acid3 compliance in Safari 4 as well as desktop integration, dramatically better performance in IE8 over IE7, and so on.

We’ve started doing a lot of RIA work using JavaScript at Talis, so understanding how to write cross-browser, performant scripts is a must. This talk is great food for thought for anyone who wants to understand modern browser architecture and benefit from some great tips.

Test-Driven JavaScript Development with JsUnit

The last time I used JsUnit was when I first joined Talis. At the time my colleague Ian Davis asked me to write a JavaScript client library for one of our platform APIs, to make it easy for developers to perform bibliographic searches. It wasn’t a particularly difficult task and I did it relatively easily. It was around the same time that Rob was extolling the virtues of Test Driven Development to me, and to try to prove his point we agreed to do an experiment: he asked me to set aside the library I had written and see if I could develop it again using test driven development. It meant I had to figure out how to unit test JavaScript, and that’s when I found JsUnit. I did the exercise again and even I was impressed with the results. By having to think about the tests first, and design the interface to the library as I wrote each test, it evolved very differently from my original solution. Consequently it was also far superior.

Anyway, fast forward two and a half years and I find myself in a similar situation. We have only just begun writing bits of JavaScript code based around prototype.js to help us create richer user experiences in our products when we detect that JavaScript is enabled in the browser. This means I want to ensure that we use the same rigour when writing these bits of code as we do in all other parts of the application – just because it’s JavaScript and executed inside the browser doesn’t mean it shouldn’t be tested.

I’ve just spent the morning getting JsUnit installed and figuring out how to get it to run as part of a continuous integration process, as well as thinking about how to write tests for some slightly different scenarios. Here’s what I’ve discovered today:

Installing JsUnit

Couldn’t be easier … go to the JsUnit website, download the latest distribution, and extract it into a folder on your system somewhere – let’s say /jsunit for now. The distribution contains both the standard test runner and the JsUnit server, which you will need if you want to hook it into an Ant build.

Writing Tests

In JsUnit we place our tests in an HTML Test Page, which is the equivalent of a Test Class. This test page must have a script reference to jsUnitCore.js so the test runner knows it’s a test. So let’s work through a simple example. Let’s say we want to write a function that returns the result of adding two parameters together. The Test Page for this might look like this:

  <html>
   <head>
    <title>Test Page for add(value1, value2)</title>
    <script language="javascript" src="/jsunit/app/jsUnitCore.js"></script>
    <script language="javascript" src="scripts/addValues.js"></script>
   </head>
   <body>
     <script language="javascript">
     function testAddWithTwoValidArguments() {
         assertEquals("2 add 3 is 5", 5, add(2,3) );
     }
   </script>
   </body>
  </html>

For now let’s save this file to /my-jsunit-tests/addTest.html.

To run the test you need to point your browser at the test runner in your JsUnit install, passing it your Test Page as the testPage parameter:

  file:///jsunit/testRunner.html?testPage=/my-jsunit-tests/addTest.html
The test will not run since we haven’t defined the add function. Let’s do that (very crudely):
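A minimal definition, saved as scripts/addValues.js, might look like this:

```javascript
// scripts/addValues.js - a deliberately crude implementation, just enough to
// make the test pass.
function add(value1, value2) {
  return value1 + value2;
}
```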


Now if you go to that URL it will run the test and report that it passed. Excellent – we’ve written a simple test in JavaScript. Now let’s extend this a little. Let’s say I want to write something more complicated, like a piece of JavaScript that uses Prototype.js to update the DOM of a page. Is this possible? Can I do that test first? It turns out that you can …

Let’s say we have a div on the page called ‘tableOfContents’, and we want to use Prototype.js to dynamically inject a link onto the page that says [show]. Let’s also say we want to write a function that toggles this link to say [hide] when the user clicks on it; the link will also set the visible state of the table of contents itself, which for now we’ll say is just an ordered list (OL). Our test page is going to be slightly more complex …

  <html>
   <head>
    <title>Test Page for toggleTableOfContents()</title>
    <script language="javascript" src="/jsunit/app/jsUnitCore.js"></script>
    <script language="javascript" src="scripts/prototype/prototype-"></script>
    <script language="javascript" src="scripts/tableOfContents.js"></script>
   </head>
   <body>
     <div id="tableOfContents">
     <h2 id="tableOfContentsHeader">Table of contents</h2>
     <ol id="list-toc">
     </ol>
     </div>
     <script language="javascript">
     function testTOC()
     {
         var title = $('lnkToggleTOC').title;
         assertEquals("should be Show the table of contents", "Show the table of contents", title);

         toggleTableOfContents();

         var title = $('lnkToggleTOC').title;
         assertEquals("should be Hide the table of contents", "Hide the table of contents", title);
     }
   </script>
   </body>
  </html>

There are some differences in this test. Firstly, the HTML contains some markup that I’m using as the containers for my table of contents: a header, and the contents in the form of an empty ordered list. Now, I know that I want the JavaScript to execute when the page is loaded, so I’ve written this test to assume that the script will run and inject an element with the id ‘lnkToggleTOC’, which is the show/hide link next to the heading. The first line of the test therefore uses Prototype.js element selector notation to set a local variable called title to the title of the element with the id ‘lnkToggleTOC’. If the script fails to execute then this element will not be present and the subsequent assert will fail. If the assert succeeds, we call the toggleTableOfContents function and then repeat the same evaluation, only now we are checking to see if the link has been changed.

The code for tableOfContents.js is as follows:

  // Reconstructed sketch of scripts/tableOfContents.js - the behaviour is as
  // described above: hide the list on load, inject the [show] link, and
  // toggle link text, title and list visibility on each click.
  var titleShowTOC = 'Show the table of contents';
  var titleHideTOC = 'Hide the table of contents';

  Event.observe(window, 'load', function() {
    $('list-toc').hide();
    $('tableOfContentsHeader').setStyle({ display: 'inline' });
    var link = new Element('a', { 'id': 'lnkToggleTOC',
        'href': 'javascript:toggleTableOfContents()',
        'title': titleShowTOC, 'class': 'ctr' }).update('[show]');
    $('tableOfContentsHeader').insert({ after: link });
  });

  function toggleTableOfContents() {
    if ($('list-toc').visible()) {
      $('list-toc').hide();
      $('lnkToggleTOC').update('[show]').title = titleShowTOC;
    } else {
      $('list-toc').show();
      $('lnkToggleTOC').update('[hide]').title = titleHideTOC;
    }
  }

Now if we run this test in the same way we executed the previous test, it will pass. I accept that this example is a bit contrived, since I know it already works, and I’ve skimmed over some of the details around it. The point I’m trying to make, though, is that you can write unit tests for pretty much any kind of JavaScript you need to write – even tests for scripts that do DOM manipulation or make Ajax requests.

Setting up the JsUnit server so you can run it in a build

JsUnit ships with its own Ant build file that requires some additional configuration before you can run the server. The top of the build file contains a number of properties that need to be set. Here’s what to set them to (using the paths from the example above):

  <project name="JsUnit" default="create_distribution" basedir=".">
    <property
      name="browserFileNames"
      value="/usr/bin/firefox-2" />

    <property
      id="closeBrowsersAfterTestRuns"
      name="closeBrowsersAfterTestRuns"
      value="false" />

    <property
      id="ignoreUnresponsiveRemoteMachines"
      name="ignoreUnresponsiveRemoteMachines"
      value="true" />

    <property
      id="logsDirectory"
      name="logsDirectory"
      value="/my-jsunit-tests/results/" />

    <property
      id="port"
      name="port"
      value="9001" />

    <property
      id="remoteMachineURLs"
      name="remoteMachineURLs"
      value="" />

    <property
      id="timeoutSeconds"
      name="timeoutSeconds"
      value="60" />

    <property
      id="url"
      name="url"
      value="file:///jsunit/testRunner.html?testPage=/my-jsunit-tests/tocTest.html" />
  </project>

You can then type the following command in the root of the JsUnit distribution. It launches the JsUnit server, executes the test, outputs a test results log file formatted just like JUnit’s, and reports that the build was successful, or failed if the test fails.

  ant standalone_test

Remember that in this example I’ve used a single, simple Test Page; however JsUnit, like any xUnit framework, allows you to specify Test Suites, which is how you would run multiple Test Pages. Also, the parameters in the build file wouldn’t be hardcoded in your continuous integration process, but would instead be passed in, and you would want to call it from your project’s main Ant build file … all of which is pretty simple to configure once you know what it is you want to do and what’s possible.
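For completeness, the script on a Test Suite page looks roughly like this. The stub of `top` below stands in for the object the JsUnit test runner normally provides, purely so the sketch is self-contained; inside the runner you would omit it:

```javascript
// Illustrative stub of the runner environment (the real test runner supplies
// top.jsUnitTestSuite itself).
var top = {
  jsUnitTestSuite: function () {
    this.testPages = [];
    this.addTestPage = function (page) { this.testPages.push(page); };
  }
};

// A suite page defines a suite() function that aggregates Test Pages:
function suite() {
  var newSuite = new top.jsUnitTestSuite();
  newSuite.addTestPage('/my-jsunit-tests/addTest.html');
  newSuite.addTestPage('/my-jsunit-tests/tocTest.html');
  return newSuite;
}
```

Pointing the test runner at a page containing a suite() function runs every Test Page the suite aggregates.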


Processing is an open source data visualisation programming language. I first played around with it about a year ago, was recently reminded of it by Rob, and have started playing with it again. However, I just discovered that earlier in the week John Resig released his JavaScript port, Processing.js. So far it looks amazing: virtually all the demo/example applications that ship with Processing run using the canvas element in JavaScript. I’m going to have a lot of fun with this.

John deserves a huge amount of credit for this contribution.

Best practices in JavaScript library design

This is a very useful tech talk by John Resig, that explores a number of techniques used to build robust, reusable cross-platform JavaScript libraries.

John offers some excellent advice, and whilst some of it might seem obvious, it’s worrying how many existing APIs fall into some of the common pitfalls he describes.

John argues that part of writing good, solid APIs is keeping the code orthogonal, by ensuring that whenever you perform an action on an object, that action is consistent across all objects. In other words, each object should expose the same methods – add(), remove(), all(), etc. This creates familiarity: developers using the API know that different objects, responsible for different things, can all be used consistently.
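A small sketch of that orthogonality: different objects expose the same verbs and behave the same way. The names here are illustrative, not from any real library:

```javascript
// One factory produces collections that all share the add/remove/all verbs,
// so learning one object means learning them all.
function makeCollection() {
  var items = [];
  return {
    add: function (item) { items.push(item); return this; },
    remove: function (item) {
      // Plain loop rather than indexOf, which old browsers lack on arrays.
      for (var i = 0; i < items.length; i++) {
        if (items[i] === item) { items.splice(i, 1); break; }
      }
      return this;
    },
    all: function () { return items.slice(); }
  };
}

// Events and handlers are different things, but both are driven identically:
var events = makeCollection();
events.add('click').add('load').remove('click');
```

Returning `this` from add() and remove() also keeps the calls chainable, which is itself a form of consistency.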

John also makes the obvious but profound point that when creating an API you should fear adding methods, the reason being that every method you write is one you will have to maintain. In fact, you should embrace the idea that removing unused code is a good thing: it reduces the size of your API, making it easier to learn and easier to maintain.

Going back to consistency, it’s imperative that we use good naming conventions and schemes and stick with them. This also means being very diligent about argument position in method calls … I know how frustrating it is when using some of the string processing methods in PHP that the argument order changes – it’s annoying!

John goes on to offer much more advice on encapsulation, functional programming, and compression of libraries using Dojo. He also advocates Test Driven Development for API design, which generally results in better APIs.

It’s an excellent talk and well worth watching for anyone working on building JavaScript libraries, or indeed any kind of API.

JavaScript can provide a richer user experience without compromising Accessibility.

When Rob and I originally put Cenote together, one of the constraints we had to work within was that the application must work without relying on JavaScript. This is primarily because relying heavily on JavaScript can introduce accessibility issues, including:

  • Making navigation difficult: sites that rely on JavaScript can be very difficult to navigate using a keyboard or other assistive technologies, and impossible to navigate if JavaScript is disabled in the browser.
  • Hiding content: sites that rely heavily on JavaScript can present content and functionality that is not accessible to assistive technologies, such as text readers.
  • Removing user control: users have no control over automated content changes. For users with no accessibility needs this isn’t a problem, but users who rely on assistive technology have no way of knowing that content has changed. This can be very confusing and disorienting, since it can also involve altering normal browser functionality and triggering events the user is not aware of.

This doesn’t necessarily mean that pages containing JavaScript cannot be fully accessible. Such pages can be accessible if the functionality is device independent (in other words, it does not require only a mouse or only a keyboard to work) and the content is always available to assistive technologies.

So based on this knowledge, Rob and I made the conscious decision to develop Cenote first without using any JavaScript at all. This would ensure that the content rendered on the pages was always available to assistive technologies. We agreed that we could then enhance the user experience by using JavaScript to manipulate the DOM of the page in the browser.

To see this in action, visit this page in Cenote. If you scroll to the bottom of the page you’ll notice a list of libraries that you navigate through using a paging control (pictured). You’ll also notice that the book jacket in the top left-hand corner of the page has a nice reflection effect. Both these effects are made possible through JavaScript, and provide a slightly richer user experience. To see what I mean, try turning off JavaScript in your browser [1] and hit refresh. The page now displays the book jacket without the reflection, and the complete list of libraries without the paging control.

So how is all this achieved? Well, let’s take a closer look at the paging control. The snippet of HTML below shows a simplified [2] version of the markup that actually represents the items in the list [3]. Take particular note of the empty div after the unordered list – this is the placeholder we inject the paging control into:

  <ul id="Libraries">
    <li><a class="item-library" href="#">Aberdeenshire Libraries</a></li>
    <li><a class="item-library" href="#">Barnsley MD Library</a></li>
    <li><a class="item-library" href="#">Blackburn</a></li>
  </ul>
  <div id="pagingcontrol"></div>

Now, when you view the page without JavaScript enabled, the browser renders this markup exactly as it is. Assistive technologies only use the HTML markup on the page; in the case of text readers, this HTML is parsed and the software reads the content back to the user. When JavaScript is enabled the markup on the page does not change – it stays exactly the same. We use JavaScript to manipulate the browser’s DOM, which is the model the browser generates internally to represent the page. In the example above, the unordered list is identified by the id “Libraries”, which makes it easy to find in the DOM, as this snippet of code illustrates:

  // Reconstructed sketch: find the unordered list by its id and copy the
  // contents of each <LI> into a temporary array called libraries.
  var libraries = new Array();
  var items = $('Libraries').getElementsByTagName('LI');
  for (var i = 0; i < items.length; i++) {
    libraries[i] = items[i].innerHTML;
  }

Here we use its identifier to find the unordered list we want to augment with our paging control. Each list element in the unordered list is what we refer to as a holding, we simply extract the contents of each of those nodes and temporarily store them in an array called libraries. In the case of the page I pointed you to earlier this array would contain 50 items.

Once we have found and copied the original complete list of libraries we can now proceed with updating the DOM to display to the user only the first 10 libraries, and render a paging navigation control. The function below updates the DOM by removing the original contents of the Libraries node, and replaces it with 10 items indexed from the specified startPosition.

  // Reconstructed sketch - see pagingcontrol.js in the Cenote source for the
  // real implementation. Replaces the contents of the Libraries node with the
  // 10 items starting at startPosition, then renders the paging control.
  function printPage(startPosition) {
    var html = '<UL>';
    for (var x = startPosition; x < startPosition + 10; x++) {
      if (x < libraries.length) {
        html += '<LI>' + libraries[x] + '</LI>';
      } else {
        html += '<LI> </LI>';
      }
    }
    html += '</UL>';
    $('Libraries').innerHTML = html;
    renderNavigationControls();
  }

The final line in the function above calls a method to render the navigation controls, which I’ve copied below. You’ll notice that we find the placeholder identified by “pagingcontrol”, and render the previous, next and individual page links ( renderPageNumbers() ) into it.

  // Reconstructed sketch - currentPage and lastPage are illustrative names;
  // again, see pagingcontrol.js for the real implementation. The previous and
  // next links are rendered as plain text when there is no page to go to.
  function renderNavigationControls() {
    var html = '';
    if (currentPage > 0) {
      html += '<a href="javascript:printPreviousPage();" title="Previous Page">< previous</a>';
    } else {
      html += '< previous';
    }
    html += '&nbsp;&nbsp;' + renderPageNumbers() + '&nbsp;&nbsp;';
    if (currentPage < lastPage) {
      html += '<a href="javascript:printNextPage();" title="Next Page">next ></a>';
    } else {
      html += 'next >';
    }
    $('pagingcontrol').innerHTML = html;
  }

To reiterate: because this technique manipulates the DOM, the original markup isn’t changed at all. Project Cenote was recently open sourced, so you can download the entire source base – which includes the JavaScript file, pagingcontrol.js, from which I took the extracts above – to get a feel for how it works and experiment with it.

Now, the above example might not seem all that impressive – it’s just a little paging control, right? Well, it’s a simple example that illustrates a technique which can be very powerful. Consider the following images, which show a calendar view created by one of my colleagues, Andrew. It uses an unordered list to represent a series of events, and that’s what you see when JavaScript is disabled:

However with JavaScript enabled the DOM of the page is rewritten and combined with some CSS it looks like this:

Now that’s more impressive, right? I hope this serves to illustrate how you can use JavaScript to provide richer user experiences without compromising accessibility. It’s an area I’ve been thinking about a lot recently, and there are wider issues around how to develop better JavaScript libraries that understand how to expose dynamic content accessibly, and whether this needs to be considered a best practice that more developers adopt. These issues are beginning to have wider implications, as my friend Alan put it earlier this week:

it will address issues of how traditianal HCi changes in the face of web2.0: when people are doing mash-ups (so there are more ‘artisan’ interface constructors), or end users configuring their own spaces (as in MySpace), or how you build ‘good practice’ into javascript libraries like prototoype or yahoo! YUI

… when he contacted Justin and me earlier this week and invited us, along with others from Talis, to attend this year’s British HCI Conference (HCI 2007), taking place at Lancaster University. How we develop best practices for building truly accessible JavaScript libraries is one of the issues I’ll be raising with the panel when I attend the conference.

  1. In Firefox you can achieve this by going to Tools -> Options -> Content and unchecking the box labelled JavaScript. If you don’t already use it, though, I recommend installing the Web Developer extension for Firefox.
  2. I’ve removed the url and the title attribute and the icon image from each link to simplify the example.
  3. I’m using Dean’s Syntax Highlighter, but specifying html means that Snap Shots tries to execute against it, so I’ve specified XML instead.