Daniel Suarez on why the kill decision should not belong to a robot

Daniel Suarez's talk is one I think everyone should watch. The more I consider his words, the more I'm convinced that he is right in calling for an international ban on the development and deployment of autonomous killer robots. He makes many good points during the talk, but here are the ones that really made me stop and think:

because as we migrate lethal decision-making from humans to software, we risk not only taking the humanity out of war, but also changing our social landscape entirely, far from the battlefield. That’s because the way humans resolve conflict shapes our social landscape … Now if responsibility and transparency are two of the cornerstones of representative government, autonomous robotic weapons could undermine both … And this is why we need an international treaty on robotic weapons, and in particular a global ban on the development and deployment of killer robots. Now we already have international treaties on nuclear and biological weapons, and, while imperfect, these have largely worked. But robotic weapons might be every bit as dangerous, because they will almost certainly be used, and they would also be corrosive to our democratic institutions.

Teaching kids to code

A really interesting talk that demonstrates Greenfoot, a development environment designed to make it easy for novices to learn programming in Java.

Students at the high-school level might not be as committed to programming, so we have tried to create an interesting program that engages them. In order to do this, we acknowledge that students have different opinions on what might be interesting; hence, we sought to create a flexible environment that could be customised for a specific group of students. Furthermore, in the design of Greenfoot we considered the different learning styles of students.

I'm not sure I agree entirely with this approach, but I do welcome anything that helps engage students. On reflection, though, the top-down pedagogical approach to teaching and learning that Greenfoot encourages is something I am in favour of.

Adaptive Algorithms for Online Optimisation

ABSTRACT

The online learning framework captures a wide variety of learning problems. The setting is as follows – in each round, we have to choose a point from some fixed convex domain. Then, we are presented with a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm's loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, the regret is not a good measure of performance. We define *adaptive regret*, a notion that is a much better measure of how well our algorithm is adapting to the changing loss functions. We provide a procedure that converts any standard low-regret algorithm to one that provides low adaptive regret. We use an interesting mix of techniques, and use streaming ideas to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.
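To make the two notions concrete, here is my own reading of the definitions sketched in the abstract (the notation is mine: x_t is the point chosen in round t, f_t is the loss function revealed afterwards, and K is the fixed convex domain):

```latex
% Standard regret over T rounds: the algorithm's cumulative loss minus the loss
% of the best single point chosen in hindsight.
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x)

% Adaptive regret: the worst-case regret over any contiguous interval of rounds,
% so the comparison point is allowed to differ from one interval to the next.
\mathrm{AdaptiveRegret}_T \;=\; \max_{1 \le r \le s \le T}
  \left( \sum_{t=r}^{s} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=r}^{s} f_t(x) \right)
```

The second definition is what lets the algorithm track a changing environment: a strategy that is good on average over all T rounds can still be terrible on some stretch of rounds, and adaptive regret penalises exactly that.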

Pretty interesting tech talk. I found the notion of minimising regret quite interesting, but only really because I had heard of it before without ever seeing a real-world implementation. I first heard of the significance of regret in learning from Alan, who captured it vividly in an essay called The Adaptive Significance of Regret, which he wrote back in 2005. In fact he even showed me some PHP code he had written that modelled regret, which at the time I remember finding somewhat amusing … but right now it feels far more significant.

Gears and the Mashup Problem

Abstract:

Mashups are the most interesting innovation in software development in decades. Unfortunately, the browser's security model did not anticipate this development, so mashups are not safe if there is any confidential information in the page. Since virtually every page has at least some confidential information in it, this is a big problem. Google Gears may lead to the solution.

… a very interesting tech talk; I think I need to play around with Gears a bit more!

Science Commons

James Boyle gives a very interesting talk on Science Commons, a project within the Creative Commons movement that strives to remove unnecessary legal and technical barriers to the sharing of scientific materials in order to facilitate collaboration and innovation. Boyle gave another, similar talk on 7 ways to ruin a technical revolution, and it's well worth listening to both of these talks.

Science Commons was launched to expand the Creative Commons mission into the scientific realm. James Boyle will be talking about two Science Commons projects: The Neurocommons and the Materials Transfer Project. The Materials Transfer Project uses standard machine readable licenses so that one day sharing biological materials between labs might be as easy as buying books from Amazon. If these words weren't forbidden at Google, he'd describe the Neurocommons as a first draft of an open "semantic web" for neurology. The overall goal is to take some of the ingenuity we devote to allowing teenagers to flirt with each other online, or people to share and find mashups, and use it to reduce the transaction costs of science and make it selfishly beneficial for scientists to share more, and more easily.

Creativity: the mind, machines and mathematics

Having one of those nice quiet weekends where I get to catch up on some reading and a few webcasts! Came across this really interesting debate entitled the mind, machines and mathematics. The event was held last November on the 70th Anniversary of Alan Turing’s1 seminal paper “On Computable Numbers“. The purpose of the debate was to discuss the question: “Can we build super intelligent machines or are we limited to building super intelligent zombies?“.


Photo: Donna Coveney. From left: Ray Kurzweil, Rodney Brooks and David Gelernter

The participants in the debate are David Gelernter, Professor of Computer Science at Yale, and Ray Kurzweil, a prodigious inventor and author of "The Age of Intelligent Machines". The debate is moderated by Rodney Brooks, the director of MIT's Computer Science and Artificial Intelligence Laboratory.

Kurzweil takes the position that machines will achieve human-level intelligence, whilst Gelernter takes the opposing, anti-cognitivist standpoint. I guess it's not surprising that the key point of contention was defining consciousness, or rather whether it can be defined at all.

For example, Kurzweil makes the point that "there is no consciousness detector that we can imagine creating … that doesn't have some philosophical assumptions built into it", to which Gelernter insists, perhaps rightly, that "you can't possibly understand the human mind if you don't understand consciousness". Listening to them argue about consciousness I chuckled, as I was immediately reminded of a passage from the beginning of Dennett's Consciousness Explained, which I plucked off my bookshelf and have transcribed in part here (taken from the opening couple of pages of Chapter Two: Explaining Consciousness):

Human consciousness is just about the last surviving mystery …There have been other great mysteries: the mystery of origin of the universe, the mystery of life and reproduction, the mystery of time, space and gravity. These were not just areas of scientific ignorance, but of utter bafflement and wonder. We do not yet have the final answers to any of the questions of cosmology and particle physics, molecular genetics and evolutionary theory but we do know how to think about them. The mysteries haven’t vanished, but they have been tamed … we know how to tell the misbegotten questions from the right questions, and even if we turn out to be dead wrong about some of the currently accepted answers, we know how to go about looking for better answers.

With consciousness, however, we are still in a terrible muddle. Consciousness stands alone today as a topic that often leaves even the most sophisticated thinkers tongue-tied and confused. And, as with all the earlier mysteries, there are many who insist – and hope – that there will never be a demystification of consciousness.

Dennett's book is a wonderful read, but it's one of those texts that you have to persevere with, since it can be fairly inaccessible. Let's face it, he combines psychology, philosophy and neuroscience along with many other areas of research, so it's small wonder some readers struggle with it. In fact I vividly recall one of the undergraduates on my AI course, years ago, who grew rather frustrated with it and described it as a head fuck. Which may not be entirely unfair … anyway, I digress … 🙂

During the debate Gelernter goes on to argue that building a conscious mind "out of software seems to be virtually impossible", since software by definition can be taken from one computer to another, "peeled off", ported from one platform to another and run in a "logically identical way on any computing platform", whereas "the mind cannot be ported to any other platform or even to an instance of the same platform". And even if consciousness is an emergent property, running hugely complex programs with billions or trillions of processes gives us no reason to believe that consciousness would, or even could, emerge.

Kurzweil, rather optimistically perhaps, went on to point out that "that's because we're thinking of software as it is today", since information technology is expanding exponentially and continuing research into the human brain is revealing more about brain chemistry and neural functions. I guess the point he wanted to make was that a biological brain shifting chemicals around isn't really that dissimilar to a computer shifting symbols? Which is a pretty valid point, since Gelernter also stated that "we don't have the right to dismiss out of hand the role the chemical makeup of the brain plays in creating the emergent property of consciousness".

Invariably any discussion of consciousness leads to the question of spirituality, which Gelernter defined as a "thirst for the living God". He asks, and answers, the equally inevitable question: "can we build a robot with a physical need for a non-physical thing? Maybe, but don't count on it. And forget software."

I did laugh out loud when Gelernter offered an answer to the question of whether super-intelligent conscious machines are even desirable:

I think it's desirable to learn about every part of a human being. But assembling a complete artificial human being is a different project. We might easily reach a state someday where we prefer the company of a robot from Walmart to our next-door neighbours … but it's sad that in a world where we tend to view such a large proportion of our fellow human beings as useless we are so hot to build new ones! In a western world that no longer cares to have children at the replacement rate we can't wait to make artificial humans – believe it or not, if we want more complete, fully functional people we can have them right now, all natural ones; consult me afterwards and I'll let you know how it's done.

I could provide a blow-by-blow account of the entire debate, but I won't 😉 I think you should watch it, enjoy it for yourselves, and form your own opinions. What I will say, though, is that it's a wonderful little debate; the speakers are both engaging and endeavour to inject a fair amount of humour into it, which makes this a really entertaining and informative discussion to watch.

  1. AlanTuring.net, http://www.alanturing.net [back]

Google Tech Talk: Away with applications: The death of the desktop

The computer desktop metaphor is ubiquitous, but how much work do we get done there? None! Time is entirely wasted navigating or shuffling content to the application in which we can finally work. What lessons can we learn from designing interfaces without the desktop and without applications? Is it even possible? And how does this apply to the Web? Currently, Web applications are often more usable than their desktop-based counterparts because each one does one thing and does it well.

Aza Raskin gives this excellent talk, which is really about human-computer interaction and usability. For those who don't know, Aza is the son of Jef Raskin, the guy who started the Macintosh project at Apple.

Aza offers some very useful views on user interface design; he touches on GOMS models, cognetics and habituation in a wonderfully easy-to-follow manner. In this talk he outlines how we can get rid of the application-centric model that comes from the desktop design paradigm, in order to free functionality so it can be made accessible through a ZUI (zoomable user interface) along with a universal method for accessing functionality.

Applications are like walled cities that hoard their functionality, but we need to give that functionality away so others can use it wherever they are. To facilitate this, Aza argues that we need a universal access interface. Web services give you a separation between the UI and the data, but up until now services have really only been available to developers; they're not really intended for end users. Can we expose them through CLIs? He proposes a synthesis between GUIs and CLIs, and from what he says they're having great success; some of the examples he shows are compelling. I for one can see the value of this. In fact, we already put it into practice about six months ago.

You see, this was something Rob and I thought about when we developed Project Cenote: one of the features of the user interface is that the browser's URL line is an interface in its own right. For example, if you type this into the URL line:

http://cenote.talis.com/author/gemmell

the application will perform a search for all items authored by "gemmell". So if you're like me and you just want to get to the content you're interested in, you can use this instead of navigating around the site and entering search terms into a search box. It is basically a command line interface, and I think this is a wonderful way of giving end users access to content without necessarily forcing them to always use a GUI.
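To give a feel for the pattern, here's a simplified sketch (not the actual Cenote code; all class and path names are purely illustrative) of how a plain Java servlet could treat the URL path as a command followed by its argument:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only: routes "/author/gemmell" style paths as commands.
public class UrlCommandServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String path = req.getPathInfo(); // e.g. "/author/gemmell"
        if (path == null || path.length() <= 1) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "No command given");
            return;
        }

        String[] parts = path.substring(1).split("/", 2); // ["author", "gemmell"]
        String command = parts[0];
        String argument = parts.length > 1 ? parts[1] : "";

        if ("author".equals(command)) {
            // A real implementation would run the search and render the results.
            resp.getWriter().println("Searching for items authored by: " + argument);
        } else {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND, "Unknown command: " + command);
        }
    }
}
```

The nice thing about this style is that the URL doubles as documentation: once you've seen /author/gemmell you can guess the form of other queries without ever touching the GUI.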

I was amused when Aza paraphrased Asimov's Three Laws of Robotics into Raskin's Rules of Interfaces:
1. An interface shall not harm your content or, through inaction, allow your content to come to harm.
2. An interface shall not waste your time or require you to do more work than is strictly necessary.
3. An interface shall not allow itself to get into a state where it cannot manipulate content.

This is a great talk to listen to and full of some very useful tips.

Google Tech Talk : Flex, Flash and Apollo for Rich Internet Applications

ABSTRACT: James Ward, engineer and evangelist for Adobe’s Flex, Flash and Apollo technologies, will demonstrate their use for very rich user experiences in internet applications. Topics covered will include ECMAscript, the recent open source donation of the scripting engine to the Apache Tamarin project, Apollo (the standalone execution environment for running desktop applications written in flash and HTML) and much more

Google Tech Talk: Faith, Evolution and Programming Languages

Faith and evolution provide complementary–and sometimes conflicting–models of the world, and they also can model the adoption of programming languages. Adherents of competing paradigms, such as functional and object-oriented programming, often appear motivated by faith. Families of related languages, such as C, C++, Java, and C#, may arise from pressures of evolution. As designers of languages, adoption rates provide us with scientific data, but the belief that elegant designs are better is a matter of faith…

This is a wonderful talk by Philip Wadler from the University of Edinburgh. He's one of the individuals responsible for getting generics into Java 5, and has worked on Haskell and, very heavily, on the development of functional programming languages throughout his career.

It's surprising how well the evolution vs faith analogy applies to the way in which we, as developers, often adopt programming languages. For some reason the talk made me remember the old Java vs .NET arguments, which were less about rational differences in the semantics and philosophy of the programming languages and more about which camp you belonged to and your unswerving faith and loyalty to it. In fact, that's a poignant example of when multiculturalism went out of the window and fundamentalism was very much in fashion.

The talk also provides a fair amount of history around some of the issues that polarised language designers, such as static vs dynamic typing. I found this provided some wonderful background that I was never aware of.

If you're interested in programming languages, their adoption and their evolution over time, then this is a fascinating, and unique, talk that you should really watch.

Google Tech Talk: Java on Guice: Dependency Injection, the Java Way

Here's a really interesting talk about how to use Guice, a new open source dependency injection framework for Java from Google. Here's a link to the user guide, which explains, using an example, why Guice might be a great alternative to using static references or factory patterns when writing unit tests. I haven't used Guice yet, but I have written many unit tests for services that need mocked services passed in via the factory pattern, so I can immediately see the benefit of a framework like Guice.
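Since I haven't actually used it yet, treat this as a minimal sketch based on my reading of the user guide rather than anything battle-tested (the BillingService and OrderProcessor names are just for illustration): you declare dependencies with @Inject, wire interfaces to implementations in a module, and ask the injector for fully constructed objects.

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

// Illustrative service names, not from any real codebase.
interface BillingService {
    void chargeOrder(String order);
}

class RealBillingService implements BillingService {
    public void chargeOrder(String order) {
        System.out.println("Charging: " + order);
    }
}

// The client declares what it needs via @Inject rather than calling a factory.
class OrderProcessor {
    private final BillingService billing;

    @Inject
    OrderProcessor(BillingService billing) {
        this.billing = billing;
    }

    void process(String order) {
        billing.chargeOrder(order);
    }
}

// A module wires interfaces to implementations in one place.
class BillingModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(BillingService.class).to(RealBillingService.class);
    }
}

public class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new BillingModule());
        OrderProcessor processor = injector.getInstance(OrderProcessor.class);
        processor.process("order-42");
    }
}
```

The appeal for testing is that a test can install a different module that binds BillingService to a mock, leaving no factory boilerplate in the production code.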

I'm going to delve deeper into it, but I recommend watching the tech talk; they work through a simple example, and it does sound very useful.