Philip Zimbardo prescribes a healthy take on time

Psychologist Philip Zimbardo says happiness and success are rooted in a trait most of us disregard: the way we orient toward the past, present and future. He suggests we calibrate our outlook on time as a first step to improving our lives.

Interestingly enough, having recently re-read George Lakoff’s Metaphors We Live By, I find that Zimbardo’s perspective makes a lot of sense. Lakoff’s book also opens with a lengthy discussion of how the ways we talk about time influence the decisions we make: time is money, time is a resource, time is moving, and so on. He goes on to discuss how much our mindset, which is shaped by culture, affects our decisions. I’m not entirely sure how comfortable I am with Zimbardo’s thesis on the optimal temporal mix, although at first glance it seems to make perfect sense:

So, very quickly, what is the optimal time profile? High on past-positive. Moderately high on future. And moderate on present-hedonism. And always low on past-negative and present-fatalism. So the optimal temporal mix is what you get from the past — past-positive gives you roots. You connect your family, identity and your self. What you get from the future is wings to soar to new destinations, new challenges. What you get from present hedonism is the energy, the energy to explore yourself, places, people, sensuality.
Any time perspective in excess has more negatives than positives. What do futures sacrifice for success? They sacrifice family time. They sacrifice friend time. They sacrifice fun time. They sacrifice personal indulgence. They sacrifice hobbies. And they sacrifice sleep. So it affects their health. And they live for work, achievement and control. I’m sure that resonates with some of the TEDsters.

Zimbardo seemed to be rushing along very fast, which is probably why it’s taking me time to fully appreciate his ideas, yet there is something in them that resonates deeply with me. What do others think?

Combining, minimising and distributing JavaScript

I’ve spent some time recently writing Ant scripts to generate documentation and to combine and minimise multiple JavaScript files into a single download. I thought I’d share what I have, in case others find it useful or can suggest better ways of doing what I’m trying to accomplish.

Combining multiple JS files into a single file

Here’s a simple Ant task that concatenates several files into a single file. version.txt is a file that simply contains a version number, e.g. ‘0.5’.

    <target name="combine">
      <echo message="Concatenating Files" />
      <concat destfile="./dist/uncompressed/mydistribution-${VERSION}.js">
        <fileset dir="." includes="file1.js" />
        <fileset dir="." includes="file2.js" />
        <fileset dir="." includes="file3.js" />
        <fileset dir="." includes="file4.js" />
        <fileset dir="." includes="file5.js" />
      </concat>
    </target>
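
The ${VERSION} property used above is read from version.txt. Here’s a minimal sketch of how to load it (the same approach the full build file later in this post uses), stripping the line break so the value can be embedded cleanly in file names:

    <loadfile property="VERSION" srcfile="version.txt">
      <filterchain>
        <!-- remove the trailing newline so ${VERSION} is just '0.5' -->
        <striplinebreaks/>
      </filterchain>
    </loadfile>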

Minimising using YUI Compressor

You’ll need to download the latest version of the YUI Compressor. All I’ve provided is a simple Ant wrapper around it, and an example of how to use it:

    <property name="LIB_DIR" value="./lib"/>
    <property name="YUI" value="${LIB_DIR}/yui-compressor/yuicompressor-2.4.2.jar" />

    <target name="minimiseJSFile">
      <java jar="${YUI}" fork="true" failonerror="true">
        <arg line="--type js" />
        <arg line="-o ${outputFile}" />
        <arg value="${inputFile}" />
      </java>
    </target>

    <!-- using the above -->
    <target name="minimise">
      <antcall target="minimiseJSFile">
        <param name="inputFile" value="./dist/uncompressed/mydistribution-${VERSION}.js" />
        <param name="outputFile" value="./dist/minimised/mydistribution.min-${VERSION}.js" />
      </antcall>
    </target>

It’s worth noting that by default the YUI Compressor both minimises and obfuscates code. This is because obfuscation also significantly reduces the size of the script, since it substitutes your nice variable names with single-letter ones. If you do not want this behaviour, you can add the ‘--nomunge’ directive as an arg line above.
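
For example, the java call in the minimiseJSFile target would gain one extra arg line (a minimal sketch; ‘--nomunge’ tells the compressor to minify only and leave variable names intact):

    <java jar="${YUI}" fork="true" failonerror="true">
      <arg line="--type js" />
      <!-- minify only; do not obfuscate variable names -->
      <arg line="--nomunge" />
      <arg line="-o ${outputFile}" />
      <arg value="${inputFile}" />
    </java>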

Generating JS Documentation

For this to work you’ll need to download the latest version of the JsDoc Toolkit. In the example below I’m enumerating each file I want documentation generated for; you could just as easily point it at a directory (see the sketch after the code).

    <target name="doc" description="generates documentation for core rdfQuery">
      <!-- the jsdoc-toolkit ant task is currently broken, so we run it directly -->
      <echo message="Generating Documentation:"/>
      <java jar="${JSDOC_TOOLKIT_DIR}/jsrun.jar" fork="true" failonerror="true">
        <arg value="${JSDOC_TOOLKIT_DIR}/app/run.js"/>
        <arg value="-t=${JSDOC_TOOLKIT_DIR}/templates/jsdoc"/>
        <arg value="-d=./dist/documentation/"/>
        <arg value="file1.js"/>
        <arg value="file2.js"/>
        <arg value="file3.js"/>
      </java>
    </target>
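
To point it at a directory instead, you can pass the directory as the source argument; as far as I remember, JsDoc Toolkit also accepts a ‘-r’ flag to recurse into subdirectories. A sketch, where ./src/ is a hypothetical source directory:

    <java jar="${JSDOC_TOOLKIT_DIR}/jsrun.jar" fork="true" failonerror="true">
      <arg value="${JSDOC_TOOLKIT_DIR}/app/run.js"/>
      <arg value="-t=${JSDOC_TOOLKIT_DIR}/templates/jsdoc"/>
      <arg value="-d=./dist/documentation/"/>
      <!-- -r recurses into subdirectories; ./src/ is a hypothetical source directory -->
      <arg value="-r"/>
      <arg value="./src/"/>
    </java>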

Packaging a distribution

Here we simply want to create a single, easily downloadable zip file which contains the combined JavaScript, a minimised version of it, and all the API documentation.

    <target name="dist">
      <zip destfile="./dist/mydistribution-${VERSION}.zip">
        <zipfileset dir="./dist/uncompressed/" includes="*.js" prefix="./dist/uncompressed/"/>
        <zipfileset dir="./dist/minimised/" includes="*.js" prefix="./dist/minimised/"/>
        <zipfileset dir="./dist/documentation/" includes="**/**" prefix="./dist/documentation/"/>
      </zip>
    </target>
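
One thing to note: the prefix attribute on each zipfileset controls the path the files are given inside the archive. Here it mirrors the on-disk layout; you could use shorter prefixes if you want a flatter structure inside the zip.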

Putting it all together

Here’s a real example of how you can combine all of the above. I’ve copied the build.xml that I added to the rdfQuery project below:

    <?xml version="1.0"?>
    <project name="rdfquery" basedir="." default="all">
      <loadfile property="VERSION" srcfile="version.txt" description="Version to build">
        <filterchain>
          <striplinebreaks/>
        </filterchain>
      </loadfile>

      <property name="DOCS_DIR" value="./docs" description="API documentation"/>
      <property name="DIST_DIR" value="./dist"/>
      <property name="LIB_DIR" value="./lib"/>
      <property name="JSDOC_TOOLKIT_DIR" value="${LIB_DIR}/jsdoc-toolkit/"/>
      <property name="YUI" value="${LIB_DIR}/yui-compressor/yuicompressor-2.4.2.jar" />
      <!-- Names for output -->
      <property name="JS" value="${DIST_DIR}/js/" />
      <property name="JS_MIN" value="${DIST_DIR}/minimised/" />

      <target name="all" depends="init, doc, dist"/>

      <target name="doc" description="generates documentation for core rdfQuery">
        <!-- the jsdoc-toolkit ant task is currently broken, so we run it directly -->
        <echo message="Generating Documentation:"/>
        <java jar="${JSDOC_TOOLKIT_DIR}/jsrun.jar" fork="true" failonerror="true">
          <arg value="${JSDOC_TOOLKIT_DIR}/app/run.js"/>
          <arg value="-t=${JSDOC_TOOLKIT_DIR}/templates/jsdoc"/>
          <arg value="-d=${DOCS_DIR}"/>
          <arg value="jquery.uri.js"/>
          <arg value="jquery.xmlns.js"/>
          <arg value="jquery.datatype.js"/>
          <arg value="jquery.curie.js"/>
          <arg value="jquery.rdf.js"/>
          <arg value="jquery.rdfa.js"/>
          <arg value="jquery.rules.js"/>
        </java>
      </target>

      <target name="dist">
        <antcall target="combine" />
        <antcall target="minimise" />
        <zip destfile="${DIST_DIR}/jquery.rdfquery-${VERSION}.zip">
          <zipfileset dir="${JS}" includes="*.js" prefix="${JS}"/>
          <zipfileset dir="${JS_MIN}" includes="*.js" prefix="${JS_MIN}"/>
          <zipfileset dir="${DOCS_DIR}" includes="**/**" prefix="${DOCS_DIR}"/>
        </zip>
      </target>

      <target name="combine" description="combines js files into three different files representing the three different packages for distribution">
        <echo message="Building rdfQuery Core Distribution" />
        <concat destfile="${JS}/jquery.rdfquery.core-${VERSION}.js">
          <fileset dir="." includes="jquery.uri.js" />
          <fileset dir="." includes="jquery.xmlns.js" />
          <fileset dir="." includes="jquery.datatype.js" />
          <fileset dir="." includes="jquery.curie.js" />
          <fileset dir="." includes="jquery.rdf.js" />
        </concat>

        <echo message="Building rdfQuery RDFa Distribution" />
        <concat destfile="${JS}/jquery.rdfquery.rdfa-${VERSION}.js">
          <fileset dir="${JS}/" includes="jquery.rdfquery.core-${VERSION}.js" />
          <fileset dir="." includes="jquery.rdfa.js" />
        </concat>

        <echo message="Building rdfQuery Rules Distribution" />
        <concat destfile="${JS}/jquery.rdfquery.rules-${VERSION}.js">
          <fileset dir="${JS}/" includes="jquery.rdfquery.rdfa-${VERSION}.js" />
          <fileset dir="." includes="jquery.rules.js" />
        </concat>
      </target>

      <target name="minimise">
        <echo message="Minimising rdfQuery Core Distribution" />
        <echo message="Minimising rdfQuery RDFa Distribution" />
        <echo message="Minimising rdfQuery Rules Distribution" />

        <antcall target="minimiseJSFile">
          <param name="inputFile" value="${JS}/jquery.rdfquery.core-${VERSION}.js" />
          <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.core.min-${VERSION}.js" />
        </antcall>
        <antcall target="minimiseJSFile">
          <param name="inputFile" value="${JS}/jquery.rdfquery.rdfa-${VERSION}.js" />
          <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.rdfa.min-${VERSION}.js" />
        </antcall>
        <antcall target="minimiseJSFile">
          <param name="inputFile" value="${JS}/jquery.rdfquery.rules-${VERSION}.js" />
          <param name="outputFile" value="${JS_MIN}/jquery.rdfquery.rules.min-${VERSION}.js" />
        </antcall>
      </target>

      <target name="minimiseJSFile">
        <java jar="${YUI}" fork="true" failonerror="true">
          <arg line="--type js" />
          <arg line="-o ${outputFile}" />
          <arg value="${inputFile}" />
        </java>
      </target>

      <target name="clean" description="deletes the distribution and API documentation">
        <echo message="Deleting distribution and API documentation"/>
        <delete dir="${DIST_DIR}"/>
        <delete dir="${DOCS_DIR}"/>
      </target>

      <target name="init" depends="clean">
        <mkdir dir="${DIST_DIR}" />
        <mkdir dir="${DIST_DIR}/js" />
        <mkdir dir="${DIST_DIR}/minimised" />
        <mkdir dir="${DOCS_DIR}" />
      </target>
    </project>
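
Since the project’s default target is all, running plain ant from the project root will clean, recreate the output directories, regenerate the documentation and produce a fresh distribution zip in one go.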

Summary

I hope others find this useful. There are a number of obvious improvements that could be made, but I hope it serves to illustrate the general principles. Let me know what you all think.

rdfQuery 1.0 released

I travelled down to Oxford last weekend to attend the rdfQuery Dazzle event. Fairly early on we decided that one of the things we wanted to achieve was to get v1.0 of rdfQuery released. This involved a fair bit of work: not only did we need to get the unit tests working across all the major browsers, but we also wanted to fix some of the existing bugs and get documentation written. I spent the bulk of the weekend restructuring the unit test suite so that the 1100 unit tests we have could be run together. I also introduced an Ant script to make it easy to generate documentation, as well as to create distributions. Jeni and Kal managed to get all the API documentation written, and got rdfQuery working in IE7, IE8, Firefox and Safari. It was a great effort by everyone involved.

We also spent time talking about some of the ontology support that Jeni is thinking of adding to rdfQuery, which could be very useful. We wanted closer integration with the Talis Platform too, so I’ve been working on adding Changeset support. All in all it was a great weekend.

You can download and learn more about rdfQuery here.

We’re hiring …

We are currently recruiting for a number of different positions at Talis, amongst these several openings to join our development teams. Rob has already discussed the Web Application Technical Lead role, and I’d like to mention that we are also looking for Senior Developers to join both our Platform and Education divisions.

Senior Software Developer, Education

It is the vision of our Education Division to connect faculties, students and educators using technology, with the aim of creating joined-up learning environments and providing seamless access to education resources and pedagogical expertise. We are currently working on delivering Talis Aspire to a number of Higher Education institutions here in the UK and abroad. Aspire is built entirely upon our semantic web platform, and is one of the few truly native linked data applications. Whilst the underlying technology is important, we have to balance it with an excellent user experience. If you are interested in being an early part of publishing large amounts of data on the semantic web, and in building truly compelling software that provides an excellent user experience for millions of users, then you might be interested in applying. The job spec provides more details; we only ask that when applying you try to answer two of the three questions below:

  1. Discuss the different types of automated testing that are needed to maintain high quality software. What kinds of programming language are best suited to each type of testing? What automated techniques could be used to test web-based applications and user interfaces? And how can code design and refactoring be affected by choice of language?
  2. When working with data that you do not own, there are no guarantees about the cardinality of fields or the presence of data you might want to consider mandatory. Traditional approaches to working with data from elsewhere have relied on cleaning, validating and then importing data into an application’s own database. The Semantic Web allows data to be shared at runtime. What techniques or strategies could be employed within an application to handle unreliable or unexpected data when sharing databases with other applications at runtime?
  3. Web applications are often composed of multiple interoperating systems, connected by APIs or other endpoints, and deployed across multiple environments. As usage patterns change, the application may need to scale rapidly, whilst maintaining performance and reliability levels. How can applications be designed to allow for such scaling?

Senior Platform Developer

Our Platform division is always looking for experienced developers to join the team. The job spec for this role provides far more detail, but when applying we ask that you try to answer at least two of the following three questions:

  1. The Web can be modelled as a network of nodes labelled with URLs and connected by directed arcs. Suppose we want to find all the URLs linked to and from any given URL, and all the URLs that are linked from any two given URLs. What kind of data structures might be suitable for representing and querying a network with 10^8 nodes each having between 10 and 50 arcs?
  2. Discuss the different types of automated testing that are needed to maintain high quality software. What kinds of programming language are best suited to each type of testing? What techniques could be used for testing asynchronous processes and for processes that operate over large volumes of data? Are there any situations that you wouldn’t test?
  3. Large-scale systems composed of many cooperating application servers often need to share and cache configuration. Suppose any server can initiate changes that need to be reflected in real time to the other application servers in the cluster. What strategies could you use for coordinating this kind of behaviour and how are they tolerant to various failure conditions?

Finally, I think Rob summed it up best when he said: “All in all though, we’re looking for great people to come and help us do great stuff. Get in touch!”