Marriage of Brain and Computer

I really enjoyed this talk by Stuart Hameroff. I’ve been following some of his research over at http://www.quantumconsciousness.org/ for a while now, and this talk is a wonderful introduction to his current research into the nature of consciousness and how it might be modelled in a computer architecture … so that consciousness itself might be an emergent output of some kind of computation. In many ways this presentation also provides an excellent introduction to how neuroscience has tried to understand the workings of the human brain, and how this has resulted in our understanding and building of artificial neural networks.

Creativity: the mind, machines and mathematics

Having one of those nice quiet weekends where I get to catch up on some reading and a few webcasts! Came across this really interesting debate entitled The Mind, Machines and Mathematics. The event was held last November, on the 70th anniversary of Alan Turing’s [1] seminal paper “On Computable Numbers”. The purpose of the debate was to discuss the question: “Can we build super-intelligent machines or are we limited to building super-intelligent zombies?”


Photo: Donna Coveney. From left: Ray Kurzweil, Rodney Brooks and David Gelernter

The participants in the debate are David Gelernter, Professor of Computer Science at Yale, and Ray Kurzweil, a prodigious inventor and author of “The Age of Intelligent Machines”. The debate is moderated by Rodney Brooks, the director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Kurzweil takes the position that machines will achieve human levels of intelligence, whilst Gelernter takes the opposite, anti-cognitivist standpoint. I guess it’s not surprising that the key point of contention was defining consciousness, or indeed whether it can be defined at all.

For example, Kurzweil makes the point that “there is no consciousness detector that we can imagine creating … that doesn’t have some philosophical assumptions built into it”, to which Gelernter insists, perhaps rightly, that “you can’t possibly understand the human mind if you don’t understand consciousness”. Listening to them argue about consciousness I chuckled, as I was immediately reminded of a passage from the beginning of Dennett’s Consciousness Explained, which I plucked off my bookshelf and have transcribed parts of here (taken from the opening couple of pages of Chapter Two: Explaining Consciousness):

Human consciousness is just about the last surviving mystery … There have been other great mysteries: the mystery of the origin of the universe, the mystery of life and reproduction, the mystery of time, space and gravity. These were not just areas of scientific ignorance, but of utter bafflement and wonder. We do not yet have the final answers to any of the questions of cosmology and particle physics, molecular genetics and evolutionary theory, but we do know how to think about them. The mysteries haven’t vanished, but they have been tamed … we know how to tell the misbegotten questions from the right questions, and even if we turn out to be dead wrong about some of the currently accepted answers, we know how to go about looking for better answers.

With consciousness, however, we are still in a terrible muddle. Consciousness stands alone today as a topic that often leaves even the most sophisticated thinkers tongue-tied and confused. And, as with all the earlier mysteries, there are many who insist – and hope – that there will never be a demystification of consciousness.

Dennett’s book is a wonderful read, but it’s one of those texts that you have to persevere with since it can be fairly inaccessible. Let’s face it, he combines psychology, philosophy and neuroscience along with many other areas of research, so it’s small wonder some readers struggle with it. In fact I vividly recall one of the undergraduates on my AI course, years ago, who grew rather frustrated with it and described it as a head fuck. Which may not be entirely unfair … anyway I digress … 🙂

During the debate Gelernter goes on to argue that building a conscious mind “out of software seems to be virtually impossible”, since software by definition can be taken from one computer to another, “peeled off”, ported from one platform to another and run in a “logically identical way on any computing platform”, whereas “the mind cannot be ported to any other platform or even to an instance of the same platform”. And even if consciousness were an emergent property, he argues, there is no reason to believe that it would, or even could, emerge from running hugely complex programs with billions or trillions of processes.

Kurzweil, rather optimistically perhaps, went on to point out that “that’s because we’re thinking of software as it is today”, since information technology is expanding exponentially and continuing research into the human brain is revealing more about brain chemistry and neural function. I guess the point he wanted to make was that a biological brain shifting chemicals around isn’t really that dissimilar to a computer shifting symbols? Which seems a fair point, since Gelernter also stated that “we don’t have the right to dismiss out of hand the role the chemical makeup of the brain plays in creating the emergent property of consciousness”.

Invariably any discussion of consciousness leads to the question of spirituality, which Gelernter defined as a “thirst for the living God”. He asks, and answers, the equally inevitable question: “can we build a robot with a physical need for a non-physical thing? Maybe, but don’t count on it. And forget software.”

I did laugh out loud when Gelernter offered his answer to the question of whether super-intelligent conscious machines are even desirable:

I think it’s desirable to learn about every part of a human being. But assembling a complete artificial human being is a different project. We might easily reach a state someday where we prefer the company of a robot from Walmart to our next-door neighbours … but it’s sad that in a world where we tend to view such a large proportion of our fellow human beings as useless we are so hot to build new ones! In a western world that no longer cares to have children at the replacement rate we can’t wait to make artificial humans. Believe it or not, if we want more complete, fully functional people we can have them right now, all natural ones; consult me afterwards and I’ll let you know how it’s done.

I could provide a blow-by-blow account of the entire debate, but I won’t 😉 I think you should watch it, enjoy it for yourselves, and form your own opinions. What I will say, though, is that it’s a wonderful little debate; the speakers are both engaging and endeavour to inject a fair amount of humour into it, which makes this a really entertaining and informative discussion to watch.

  [1] AlanTuring.net, http://www.alanturing.net

The brain scan that can read people’s intentions

Came across this article on the Guardian online.

A team of world-leading neuroscientists has developed a powerful technique that allows them to look deep inside a person’s brain and read their intentions before they act

When I read the headline, the first thought that sprang to mind was 1984, closely followed by Minority Report. It shows how far neuroscience is progressing, but an urgent debate is needed on the ethical issues surrounding such technologies.

The idea of being able to control a computer, or a wheelchair, with your mind sounds quite appealing on the face of it, and advocates of this technology argue that it could have many such benefits.

Detractors maintain that such technology could be used to create an Orwellian-style society. This kind of technology has the potential to change society, and we need to understand it and encourage debate around its ethical use:

“Do we want to become a ‘Minority Report’ society where we’re preventing crimes that might not happen? For some of these techniques, it’s just a matter of time. It is just another new technology that society has to come to terms with and use for the good, but we should discuss and debate it now because what we don’t want is for it to leak into use in court willy-nilly without people having thought about the consequences” – Barbara Sahakian, Professor of Neuropsychology at Cambridge

“These techniques are emerging and we need an ethical debate about the implications, so that one day we’re not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared” – Professor John-Dylan Haynes

Top 100 Alternative Search Engines

Charles S. Knight has compiled his list of the Top 100 Alternative Search Engines. The article makes for an interesting read as he describes his method of analysis, comparing them to Google under a set of categories he defines.

What made me grin the most was the reference at the end of the article to Asimov’s The Last Question, an excellent short story that wonderfully suggests an answer to the question “is Google’s mission, to organize the world’s information and make it universally accessible and useful, a fait accompli?”. In fact, for making that observation/link alone Charles gets a thumbs up from me.

Google Tech Talk: Are there search-engine disruptive ideas?

An excellent tech talk about what kinds of technology could potentially be disruptive to Google, how to understand them, and how to turn them into positives.

The talk focuses on the democratization of information and the cultural implications of this. What I found really interesting is that it touched on several points that Alan made during his talks to our group yesterday about providing better semantic support for users trying to find information.

It’s a truly fascinating talk, and if you’re working with search engines do watch it!

… "learning a new programming language is just about learning a new syntax"

It always grates on my nerves when I hear someone say that … because there’s a part of me that simply can’t bring itself to believe it’s a true assertion. Rob and I discussed the point after he interviewed a candidate last week who had made this assertion, and he asked me for my opinion. Ok, the strength of my response does worry me a little; I’m not sure whether I’ve built a fence around this issue, and that’s partly why I’m writing this, to see if I’ve overlooked something. It certainly isn’t the first time I have heard this view expressed, in fact I’ve heard it a lot, and there’s a part of me that’s beginning to wonder whether there’s any validity to it … so I’m hoping someone can tell me why it’s true. Me, I’m going to stick to my guns for now and try to explain why I think it’s wrong.

Here goes…

I think each language has its own philosophy, its reason for being different from every other language. Some languages were designed with particular types of applications in mind: for example Fortran, which was widely adopted by scientists for writing numerically intensive programs. Some languages are developed to be general-purpose, universal enough to serve all purposes (like Java, for example); whilst many languages try to be this, I don’t believe any really have managed it.

The truth (ok, as I see it) is that there are many, many programming languages out there and many more being developed. Some fall into disuse, others evolve or are extended to meet the ever-changing needs of the communities that use them. Perhaps the need for this diversity arises from the diversity of contexts in which languages are used.

So I think you have to understand those contexts, and try to understand why a particular programming language might be better suited than another for solving a given problem. To do this I believe it’s important that you try to understand the philosophy behind that language. For me, why is more important than how.

Yes, of course, you have to learn the syntax and the semantics of the language. But just understanding the syntax isn’t enough; you have to understand its nuances, its idioms. Think of it as trying to teach a computer to speak a language. You can teach it correct grammar, but that doesn’t mean the sentences it comes up with will make any sense … even though they’re grammatically correct.

As programmers we can be like that too. That’s why, when we have a good understanding of one language and move to a different one, we often fall into the trap of applying the idioms of the one to the other … or simply assume that we should be able to. A classic example of this is when a Java developer writes an object in C# and automatically starts writing public getter and setter accessor methods for each property they wish to make accessible, when anyone who understands C# knows that, whilst this works, C# has a completely different philosophy of how you expose properties (there’s a small sketch of what I mean at the end of this post). It’s a crude example but I think it illustrates the point I’m trying to make, or at least I hope it does. In any case I believe the following is certainly true:

A language that doesn’t affect the way you think about programming is not worth knowing.

— Alan Perlis

Alan Perlis was the first ever recipient of the Turing Award, in 1966. I often remember this quote because I recall how some of the students on my AI degree course used to complain that we had to learn way more languages than the students doing the normal Software Engineering degree. One of our professors, Aaron Sloman, used this quote to explain why it was important to be exposed to a wider range of languages, better suited to the problems we were attempting to solve, and why this should change the way we think about programming. Anyway, Perlis wrote many often-quoted epigrams; you can find some more here.
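Coming back to that Java-developer-writing-C# example, here’s a minimal sketch of the idiom difference I mean. The class and member names are hypothetical ones I’ve picked purely for illustration:

```csharp
// Java habits transplanted into C#: explicit getter/setter methods around a field.
public class PersonJavaStyle
{
    private string name;

    public string GetName() { return name; }
    public void SetName(string value) { name = value; }
}

// Idiomatic C#: expose the same state through a property, so callers read and
// assign it directly while the class keeps control inside the get/set blocks.
public class PersonCSharpStyle
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}
```

The behaviour is much the same either way; the difference is that callers of the second class write person.Name = "Ada" rather than person.SetName("Ada"), which is what the rest of the C# world expects.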

"…you dumbass…" … dreams within dreams … and Descartes

Had a terrible evening last night. It all started when I got on the wrong train at New Street and ended up in the middle of nowhere; it took me four hours to get home in the pouring rain. It was cold, windy and wet! I must admit I was a tad pissed off and had all sorts of thoughts going through my head … stupid rain, stupid trains, stupid universe … god must hate me, well I’ll hate him back, see how he likes that! Blah blah blah.

Anyway, as I neared my place I was actually pretty wound up and shivering, when suddenly this cat jumped out in front of me (making me jump out of my skin!) and ran under a parked car, to shelter from the rain I guess. I remember standing there momentarily, looking up at the sky, laughing and saying out loud “oh well, at least I’m not naked!” … ridiculous I know, but it made me laugh. It’s a good job no-one was around or I’d have sounded like a nutter!

Anyway, I must have got home around 9:15 and figured I was way too tired to cook, and too tired to order anything in, so I chucked a load of fruit, ice and milk into my really cool blender and, 60 seconds later, instant smoothie! Had that, thought about watching TV, but decided I was too tired for that too, so I trundled off to bed around 9:45.

Anyway, I had the strangest night. I had one of those really weird dreams where you’re actually having a dream within a dream within a dream. Not sure how or why that happens, but it’s a bit bizarre. I don’t actually remember too much about the dreams in terms of the details; it’s just the weird idea that I woke up from a dream to realise I was still in another dream, and then when I woke up in that dream I was still in another one! Finally, when I did wake up, I just lay there wondering whether I was going to wake up again … is it me or is that just freaky?

Curiously, it got me thinking about something Descartes wrote in his Meditations on First Philosophy (which I still think is heavily influenced by Plato’s Allegory of the Cave; I know many who disagree with me on that, but I think there are definitely strong parallels). Descartes attempts to find a way to undermine all of his own beliefs. He does this by considering whether he is mad, dreaming or being influenced by a powerful demon, the idea being that if any of these scenarios were the case then many of his beliefs would be false. Descartes’ writings are often fairly inaccessible, probably because of the way his works have been translated … however, in modern philosophy Descartes’ little experiment is the basis for the brain-in-a-vat thought experiment, which most people find far easier to relate to.


Image source: Wikipedia

The brain-in-a-vat experiment, in simple terms, asks us to consider two questions: (a) how do we know that what we are experiencing is actually real? and (b) if what we believe is a result of what we experience, and we can’t be sure whether what we experience is real, can our beliefs be true? The experiment asks us to imagine a brain in a vat connected to a computer that provides all the electrical impulses the brain would normally receive. The computer would then be simulating a kind of virtual reality, but the disembodied brain would never realise it. One of the better recent dramatisations of this was The Matrix, which I’m guessing most people have seen.

Anyway, I think it’s interesting food for thought.

Oh yeah … as for the dumbass bit … well, on the way to work this morning, as with most mornings, I got on the same bus as one of my colleagues, Amanda. She asked me if I’d had a good evening and I told her about my 4.5-hour trip home last night, to which she replied … “oh Nad … you’re such a dumbass” … gee thanks, Mandy! I’ll remember that!

Emotion-aware teaching software tracks students

Came across this interesting little article over at NewScientist.com. The basic idea is that students wear rings fitted with sensors that monitor heart rate, blood pressure and changes in electrical resistance caused by perspiration, and transmit the readings to a computer that assesses the wearer’s emotional state and judges whether they are interested and keeping up, or bored and struggling.

I’m not too sure about this. Whilst there is commonality in the biological responses our bodies generate, the extent to which these manifest varies from individual to individual, so I’m not sure how accurate the system is going to be in widespread use. Conversely, if it were accurate, what would the social implications be of a technology that could assess our emotional state and take remedial action?
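Purely to make the idea concrete, here’s a toy sketch of the kind of rule-based assessment such a system might make. The categories, thresholds and readings are entirely my own invented assumptions, not anything taken from the article, and as I say above they would need calibrating per individual:

```csharp
using System;

// A toy, entirely hypothetical sketch: classify a student's state from
// ring-sensor readings. Thresholds are invented for illustration and would
// need per-individual calibration.
class EmotionSketch
{
    enum State { Engaged, Bored, Struggling }

    // Very crude rules: high arousal (raised heart rate plus perspiration)
    // is read as "struggling", low arousal as "bored", the rest as "engaged".
    static State Assess(double heartRateBpm, double skinConductance)
    {
        if (heartRateBpm > 95 && skinConductance > 8.0) return State.Struggling;
        if (heartRateBpm < 65 && skinConductance < 2.0) return State.Bored;
        return State.Engaged;
    }

    static void Main()
    {
        Console.WriteLine(Assess(100, 9.5)); // Struggling
        Console.WriteLine(Assess(60, 1.5));  // Bored
        Console.WriteLine(Assess(75, 4.0));  // Engaged
    }
}
```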

"Computers are sequential, brains are associative" …

In Alan’s latest blog posting he’s been discussing his thoughts around the power of sequential thinking. It was one of the topics we discussed when he came to visit me before Christmas, and I’m glad he’s written his thoughts down; I remember struggling with some of the finer points when we were talking about it that evening. This entire debate originally began with the deliberately provocative suggestion Alan made to a colleague that the computational power of the complete internet is now roughly similar to that of a single human brain, something I mentioned in a previous posting.
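To get a feel for where a claim like that might come from, here’s a crude back-of-envelope comparison. The figures are rough orders of magnitude that I’m assuming for illustration; they aren’t taken from Alan’s posting:

```csharp
using System;

// A crude back-of-envelope comparison. All figures are rough orders of
// magnitude assumed for illustration only.
class CapacitySketch
{
    static void Main()
    {
        // Human brain: ~1e11 neurons, ~1e4 synapses each, firing at up to ~100 Hz.
        double brainOpsPerSec = 1e11 * 1e4 * 100;   // ~1e17 synaptic events per second

        // Internet: say ~1e9 connected machines, each doing ~1e9 simple ops per second.
        double internetOpsPerSec = 1e9 * 1e9;       // ~1e18 ops per second

        Console.WriteLine("Brain    ~ {0:E0} ops/s", brainOpsPerSec);
        Console.WriteLine("Internet ~ {0:E0} ops/s", internetOpsPerSec);
        // Same ballpark, within an order of magnitude or so, which is all
        // the "roughly similar" claim needs.
    }
}
```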

His colleague, like many people, dismissed the hypothesis as impossible because computers are indeed sequential and brains are associative, yet as Alan describes in his posting, the way the brain deals with sequentiality is in itself amazing. Although our brains are massively parallel, few of us can actually consciously think about more than one thing at a time. We think about one thing and then another, and as our attention shifts so too do all the mental associations that we make. All of this happens incredibly quickly, but the point is it’s still sequential.

I can’t help but think Alan might be onto something when he says:

… slower timescales that allow fuller webs of association to build and decay, but maybe there are other intermediate timescales of attention switching as well.
If this is right then the rapid sequential shifts of attention could be essential for maintaining the individual identity of percepts and concepts.

the human brain and the web (how similar are they?)

Had an awesome evening with a wonderful friend the other day. Amongst all the other things we chatted about, Alan tried to explain to me how he felt the human brain and the web were very similar. We were having this discussion at La Tasca, a Spanish tapas bar in Star City. I’m not going to suggest that I fully understood what Alan was saying at the time; it was getting very late and one of the waitresses was distracting me 😉 (hey Alan, that’s my excuse and I’m sticking to it).

Anyway, knowing Alan as well as I do, I figured he must have published his little theory at some point, and naturally he has; you can read all about it here. We kind of got onto this discussion because, during the course of our ramblings that evening, I’d mentioned Strong AI and de Garis’ work on the now (presumably?) defunct CAM-Brain building project.

Alan picked up on this and said “Did I ever tell you about my theory of …”. I think the ability to share ideas and teach others in an offhand, easy, almost anecdotal manner is something that separates great teachers from good teachers. If I had to sum Alan up, I’d have to say that he’s not only a wonderful friend and an incredible mentor, but one of the finest academics and professors I have ever had the privilege to learn from.

Anyway, getting back on track … Alan’s theory revolves around his postulation that the entire internet as we currently know it has roughly the same computational power/capacity as a single human brain. So what does this mean? Well, we spent a while debating whether it meant that mimicking biological processes artificially could result in the same kind of emergent consciousness human beings have. It’s safe to say we were dubious about this, and although I’d love to offer my own words on why … Alan summed it up wonderfully in his article:

Philosophers of mind and identity have long debated whether our sense of mind, personhood or consciousness are intrinsic to our biological nature or whether a computer system emulating the brain would have the same sense of consciousness as an emergent property of its complexity … we are nearing the point when this may become an empirically testable issue!

Of course, this does not mean that the web or a new super computer in some way is like or equal to the human mind. What it does mean is that the specialness of the human brain is not because of simple capacity or speed. If size were all that matters in cognition, we have already been beaten by our own creations. Really the specialness of our minds is in their organisation and the things that make us human beyond simple information: compassion, pain, heroism, joy – we are indeed fearfully and wonderfully made.