Ray Kurzweil spoke here a few weeks ago, although I missed his speech. I also haven’t read his latest book, The Singularity Is Near, but this review/summary makes it sound like the typical transhumanist rantings.
The idea behind The Singularity is pretty straightforward. We’ve all seen how quickly technology has transformed our lives. Twenty years ago almost nobody owned a personal computer, much less a networked computer. Today, most of us routinely use such technology, often to accomplish tasks we previously wouldn’t have imagined even wanting to do.
The transhumanists simply extrapolate that trend outward a couple of decades. As improvements in computer processing power and discoveries in the biological and physical sciences not only continue but accelerate, we will reach a point where what comes next is impossible to predict even in principle. (In math and physics, a singularity is a point where the normal rules break down; at a physical singularity, such as the one hypothesized at the center of a black hole, the density of matter is infinite and the usual solutions describing space and time are undefined.)
So, take computer power. Computer power could grow so fast that at this Singularity, a worldwide sentient computer life form arises and decides to wipe us all out a la The Terminator. Or maybe computer systems spontaneously organize that are able to solve problems beyond human cognitive limits, and our computers start churning out plans for time machines or cornucopia devices (like Star Trek’s replicators on steroids, with almost no resource limits on what can be manufactured).
Such ideas make for great science fiction. My personal favorites are the novels of Charles Stross. I just finished Stross’ Iron Sunrise which postulates a self-conscious AI entity which violates causality for its own purposes and works to prevent human beings from doing the same — sort of a benevolent computer demigod.
As I said, this makes great fiction, but when people start to take it seriously as not only a possible but a likely future, it comes across as a geeky new form of millenarianism: the ages-old belief that the end of the world as we know it is right around the corner.
Part of the problem is that the trends cited are interesting and appear to show rapid progress, but they fail to note just how computationally difficult some tasks are. That should throw a bit of cold water on how far ever-increasing computational power will get us, leaving aside the very real possibility of ultimate physical limits on computation.
For example, the review linked to above and Kurzweil both approvingly cite the ability of computers to defeat all but the most gifted human beings at chess. So far, though, no chess computer can reliably win against every human being.
And when you start to delve into the computational problems with chess you start to get an idea of how computationally difficult even relatively straightforward problems can be. Ideally, it would be nice to see a computer simply solve chess — i.e., the computer would have access to the tree of all possible moves and be able to determine a strategy for White that would always win or draw (in much the same way that a simple game like Tic-Tac-Toe is solved).
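To make “solved” concrete, here is a minimal sketch of solving Tic-Tac-Toe by exhaustively searching its full game tree with minimax. Tic-Tac-Toe is small enough that this finishes in seconds; the same brute-force idea applied to chess is what runs into the numbers discussed below.

```python
# Exhaustive minimax search over the complete Tic-Tac-Toe game tree.
# The board is a flat list of 9 squares; X moves first and maximizes.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value with perfect play: +1 if X wins, -1 if O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '  # undo the move
    return max(values) if player == 'X' else min(values)

value = minimax([' '] * 9, 'X')
print(value)  # 0 -- with perfect play from both sides, the game is a draw
```

The search visits every reachable position, which is exactly what makes the game “solved”: we know the outcome under perfect play without having to play it.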
Good luck — the decision tree for chess is immense, as in 10^120 possible lines of play immense. In contrast, there are believed to be only about 10^75 atoms in the universe. If you have a computer the size of the universe with a few billion years to spare to work at the problem, then you’ve got a shot at solving chess. Otherwise, forget about it.
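The arithmetic is easy to check with Python’s arbitrary-precision integers, using the figures above plus one assumed number: a hypothetical machine checking 10^18 positions per second (roughly an exascale supercomputer, which didn’t exist in 2005 and is generous even now).

```python
# Back-of-the-envelope check of the chess numbers above.
game_tree = 10 ** 120        # rough size of chess's tree of possible play
atoms = 10 ** 75             # figure for atoms in the universe cited above

print(game_tree // atoms)    # 10**45 lines of play *per atom in the universe*

ops_per_sec = 10 ** 18       # assumed: an exascale machine's position checks
seconds_per_year = 60 * 60 * 24 * 365
years = game_tree // (ops_per_sec * seconds_per_year)
print(years)                 # on the order of 10**94 years of computing
```

Even granting that a real solver would prune enormous swaths of the tree, the gap between 10^94 years and the universe’s roughly 10^10-year age is the point: raw speed alone doesn’t touch this.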
It turns out even a relatively simple game such as checkers has an immense decision tree as well and may not be solved in the foreseeable future, though it is probably solvable with enough computing power and enough centuries to churn away at the problem.
For the difficulties in more important research, consider the well-known difficulties in computing protein folding problems even after they are greatly simplified.
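A classic illustration of why folding is hard is Levinthal’s back-of-the-envelope argument: even a modest protein has an astronomically large conformation space. The sketch below uses the standard textbook assumptions (a 100-residue chain, roughly 3 plausible conformations per residue, and an assumed sampling rate of 10^13 conformations per second); the specific numbers are illustrative, not measured.

```python
# Levinthal-style estimate of a protein's conformation space.
residues = 100               # assumed: a modest-sized protein
per_residue = 3              # assumed: plausible conformations per residue

conformations = per_residue ** residues
print(conformations)         # roughly 5 * 10**47 possible conformations

samples_per_sec = 10 ** 13   # assumed: generous conformational sampling rate
seconds_per_year = 60 * 60 * 24 * 365
years = conformations // (samples_per_sec * seconds_per_year)
print(years)                 # on the order of 10**27 years to enumerate them
```

Real proteins fold in milliseconds to seconds, which is precisely why brute-force enumeration is the wrong model — and why even “simplified” folding computations remain hard.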
There is also the issue of just how much longer the trend of cheaper, faster computing power can be maintained. As Gordon Moore, author of the much-misunderstood Moore’s Law, told TechWorld.Com earlier this year when asked how much longer current trends in increasing computer power could continue,
It can’t continue forever — the nature of exponentials is that you push them out and eventually disaster approaches. But in terms of size you can see that we’re approaching the size of atoms which is a fundamental barrier, but it’ll be two or three generations of chips before we get that far.
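As a sanity check on the exponential Moore is describing, we can project his law forward from its starting point. Intel’s 4004 (1971) had about 2,300 transistors; assuming a doubling every two years (the usual statement of the law) gives:

```python
# Projecting transistor counts under Moore's Law (doubling every 2 years).
start_year, start_transistors = 1971, 2300   # Intel 4004
target_year = 2005

doublings = (target_year - start_year) // 2  # 17 doublings over 34 years
projected = start_transistors * 2 ** doublings
print(projected)  # 301465600 -- about 300 million transistors
```

That projection lands right around the transistor counts of mid-2000s desktop processors, which is why the law has held up so far — and why Moore’s own warning about exponentials eventually meeting physical barriers carries weight.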
Moore is also skeptical of the ability of nanocomputing and similar technologies to grow beyond specialized applications such as for bioanalytic tests.