Michael Anissimov on ‘The Challenge of Self-Replication’

Michael Anissimov has an interesting post on how exactly humanity is going to be able to develop desktop-level self-replicating machines (which are no longer science fiction but an emerging reality) while at the same time avoiding a doomsday scenario in which someone decides to, say, use a self-replicating machine to pump out swarms of nanobots carrying weaponized smallpox or something similar.

What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. Currently Mumbai is under attack by unidentified terrorists — they are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive and results in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale — microwhip scorpions, parasitic wasps, fairyflies and the like — merely copying these designs without any intelligent thought will become possible when we can scan and construct on the atomic level. Enclosing every human being in an active membrane may be the only imaginable solution to this challenge. Offense will be easier than defense, as offense needs only to succeed once, even after a million failures.

. . .

Instead of just saying, “we’re screwed”, the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place with the current political and economic order are simply fooling themselves, and putting their lives at risk.

Ah yes, when terrorism meets script kiddies.
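For what it’s worth, the toxin arithmetic in the quoted passage does hold up on the back of an envelope. Here’s a quick sanity check; the 70 kg average body mass and the rough figure of 70 million World War II deaths are my assumptions, not Anissimov’s:

```python
# Back-of-envelope check of the quoted LD50 claim.
# Assumed numbers (mine, not Anissimov's): 70 kg average body mass,
# ~70 million total WWII deaths, and perfect delivery of the toxin.

LD50_NG_PER_KG = 1.0    # lethal dose: 1 nanogram per kg of body weight
BODY_MASS_KG = 70.0     # assumed average body mass
TOXIN_KG = 1.0          # a single kilogram of toxin
WWII_DEATHS = 70e6      # rough estimate of total WWII deaths

toxin_ng = TOXIN_KG * 1e12                      # 1 kg = 10^12 ng
lethal_dose_ng = LD50_NG_PER_KG * BODY_MASS_KG  # ng needed per person
lethal_doses = toxin_ng / lethal_dose_ng        # ~1.4e10 doses

print(f"Lethal doses in 1 kg: {lethal_doses:.2e}")                    # ~1.43e+10
print(f"Multiple of WWII deaths: {lethal_doses / WWII_DEATHS:.0f}x")  # ~204x
```

In other words, the limiting factor really is the “several tonnes of delivery mechanisms,” not the toxin itself.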

On the other hand, I’m not so sure his criticism of Kurzweil, etc. is warranted. It could just be me, but it seems likely that we’ll develop a matter replicator capable of enabling a nightmare doomsday scenario long before we achieve the ability to create the sort of singleton AI that Anissimov prescribes as a solution.

Regardless, both Anissimov’s post on the matter and the discussion in the comments are worth reading and pondering.

It’s The End of the World As We Know It – Ronald Bailey on Existential Threats

In July, Ronald Bailey wrote several articles for Reason while attending the Global Catastrophic Risks Conference in Oxford. You can read Bailey’s dispatches here, here and here.

The conference was sponsored by Oxford’s Future of Humanity Institute, which is run by the always interesting Nick Bostrom. Bostrom opened the conference with perhaps the only bit of good news on existential threats — so far, none of them have come to pass,

The good news is that no existential catastrophe has happened. Not one. Yet.

On the other hand, depending on how far back you want to go to date the first Homo sapiens or Homo sapiens-like ancestor, the best explanation for this could be that we simply haven’t been around long enough to face an existential catastrophe. And, of course, the evidence supports the claim that at times the breeding population of our distant ancestors was reduced to extremely low levels.

Bostrom himself noted the debate surrounding the Toba supervolcano eruption, which some have speculated may have reduced the human population to a few thousand people, though there is also some evidence that the reduction in population may not have been quite that severe.

According to Bailey, Bostrom argued the biggest existential threats facing humanity are self-induced,

Bostrom did note that people today are safer from small to medium threats than ever before. As evidence he cites increased life expectancy from 18 years in the Bronze Age to 64 years today (the World Health Organization thinks it’s 66 years). And he urged the audience not to let future existential risks occlude our view of current disasters, such as 15 million people dying of infectious diseases every year, 3 million from HIV/AIDS, 18 million from cardiovascular diseases, and 8 million per year from cancer. Bostrom did note that, “All of the biggest risks, the existential risks are seen to be anthropogenic, that is, they originate from human beings.” The biggest risks include nuclear war, biotech plagues, and nanotechnology arms races. The good news is that the biggest existential risks are probably decades away, which means we have time to analyze them and develop countermeasures.

In his final dispatch from the conference, Bailey reported on Joseph Cirincione, who noted how human civilization almost ended in 1995 due to, of all things, a Norwegian weather satellite,

With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995 when Norway launched a weather satellite and Russian military officials mistook it as a submarine launched ballistic missile aimed at producing an electro-magnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian “football” in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.

Cirincione noted that worldwide stockpiles of nuclear weapons have been reduced dramatically since the end of the Cold War, and the possibility for a worldwide disarmament of nuclear weapons is higher than at any time since 1945.

Bailey also reports on a few folks who presented the view that a strong AI and/or nanotechnology present serious existential risks, but the arguments presented there (at least as filtered through Bailey) seemed shallow,

In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions. Social disruption creates the opportunity for a charismatic personality to take hold. “Nanotechnology could lead to some form of world dictatorship,” said [the Center for Responsible Nanotechnology’s Michael] Treder. “There is a global catastrophic risk that we could all be enslaved.”

Ok, but the reason jobs would be eliminated and this would be “an age of nanotech abundance” would be precisely that the little nanobots would be doing all the work and the resulting goods would be essentially free. I guess if by “massive social disruptions” you mean everyone skiing and hanging out at the beach instead of working, then yeah, ok, but I doubt that’s going to lead to a worldwide dictator (who, as a reactionary, would presumably want to force people to go back to work — about as attractive an offer as religious sects that demand celibacy).

Maybe it’s just me, but I’m worried about more abstract possibilities, such as a devastating gamma ray burst that would wipe out all of humanity except for Bruce Banner. And there’s always that old standby, entropy.

My New Year’s Resolution — Less WoW

So 2005 will be known at my house as the year the MMO took over our lives.

Before some students I know engaged in an evil conspiracy to get me to install World of Warcraft, a typical lunch with my wife meant talking about our kids, what happened at work, etc.

Today, though, a typical lunch goes like this:

Me: Damn. Last night I was in the Blasted Lands with some guildies when one of them aggroed a bunch of mobs and by the time we cleared ’em, some Undead Rogue bastard came and ganked us.

Lisa: Fucking horde. That reminds me of this time in Stranglethorn Vale when a mob debuffed my . . .

…and so on. Seriously, I remember we were getting into it at a Wendy’s and this small group of people was looking at us a table away going WTF?

Fortunately, I’ve finally gotten my character almost to level 60, and once I get to that point I can stop playing this game so damn much.

I can stop at anytime. Really. I just need to log in one more time to, uh, check my auctions. Yeah, that’s it.

Geek Millenarianism and The Singularity

Ray Kurzweil spoke here a few weeks ago, although I missed his speech. I also haven’t read his latest book, The Singularity Is Near, but this review/summary makes it sound like the typical transhumanist rantings.

The idea behind The Singularity is pretty straightforward. We’ve all seen how quickly technology has transformed our lives. Twenty years ago almost nobody owned a personal computer, much less a networked computer. Today, most of us routinely use such technology, often to accomplish tasks we previously wouldn’t have imagined we’d even want to do.

The transhumanists simply extrapolate that trend outward a couple of decades. As the pace of improvement in computer processing power and other inevitable discoveries in the biological and physical sciences not only increases but accelerates, we will reach a point where what comes next is impossible to predict even in principle (in math and physics a singularity is a point where the normal rules break down — for example, in a physical singularity, such as that hypothesized inside black holes, the density of matter is infinite and the usual equations describing space and time are undefined).

So, take computer power. Computer power could grow so fast that at this Singularity a worldwide sentient computer life form arises and decides to wipe us all out a la The Terminator. Or maybe computer systems spontaneously organize that are able to solve problems beyond human cognitive limits, and our computers start churning out plans for time machines or cornucopia devices (like Star Trek’s replicators on steroids, with almost no resource limits on what can be manufactured).
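The arithmetic driving that extrapolation is worth seeing on the page, because steady doubling compounds absurdly fast. A minimal sketch, assuming the pop-culture 18-month doubling time for Moore’s Law (my assumption, and hardly a law of nature):

```python
# The extrapolation at the heart of singularity arguments: steady
# doubling compounds absurdly fast. The assumed 18-month doubling
# time is the pop-culture reading of Moore's Law, not a law of nature.

doubling_months = 18
for years in (10, 20, 30):
    doublings = years * 12 / doubling_months
    print(f"{years} years -> {doublings:.1f} doublings "
          f"= {2 ** doublings:,.0f}x the computing power")
```

A couple of decades of that gets you four orders of magnitude, which is where the cornucopia talk comes from. The real question is whether the doubling actually continues, something Gordon Moore himself weighs in on below.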

Such ideas make for great science fiction. My personal favorites are the novels of Charles Stross. I just finished Stross’ Iron Sunrise, which postulates a self-conscious AI entity that violates causality for its own purposes and works to prevent human beings from doing the same — sort of a benevolent computer demigod.

As I said, this makes great fiction, but when people start to take it seriously as not only a possible but a likely future, it comes across as a geeky new form of Millenarianism — the ages-old belief that the end of the world as we know it is right around the corner.

Part of the problem is that the trends often cited are interesting and appear to show rapid progress, but they fail to note just how computationally difficult some tasks are, which might throw a bit of cold water on just how far ever-increasing computational power will get us (leaving aside the very real possibility of ultimate physical limits on computational power).

For example, both the review linked to above and Kurzweil approvingly cite the ability of computers to defeat all but the most gifted human beings at chess. So far, though, there is no chess computer that always wins against every human being.

And when you start to delve into the computational problems with chess, you start to get an idea of how computationally difficult even relatively straightforward problems can be. Ideally, it would be nice to see a computer simply solve chess — i.e., the computer would have access to the tree of all possible moves and be able to determine a strategy for White that always wins or draws (in much the same way that a simple game like Tic-Tac-Toe is solved).

Good luck — the game tree for chess is immense, as in 10^120 possible games immense (Claude Shannon’s classic estimate). In contrast, there are believed to be only about 10^75 atoms in the observable universe. If you had a computer the size of the universe with a few billion years to spare to work at the problem, then you’d have a shot at solving chess. Otherwise, forget about it.
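To make “solved” concrete, here’s a minimal sketch (mine, not anything from Kurzweil or the review) that solves Tic-Tac-Toe by exhaustive minimax search. The identical algorithm applied to chess would have to visit on the order of 10^120 lines of play, which is exactly the problem:

```python
# Exhaustively "solve" Tic-Tac-Toe by searching its entire game tree.
# The full tree here has only ~550,000 nodes; chess's is ~10^120.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board, player):
    """Game value with best play: +1 if X wins, 0 if drawn, -1 if O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0  # board full: draw
    values = []
    for i in moves:
        board[i] = player                 # try the move
        values.append(solve(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '                    # undo it
    return max(values) if player == 'X' else min(values)

# The empty board evaluates to 0: with best play, Tic-Tac-Toe is a draw.
print(solve([' '] * 9, 'X'))  # -> 0
```

Running it confirms what every schoolkid eventually figures out: with best play the game is a draw. The point is that “solving” means exhausting the tree, and exhausting chess’s tree is physically out of reach.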

It turns out that even a relatively simple game such as checkers has an immense decision tree (roughly 5 × 10^20 possible positions) and may not be solved in the foreseeable future, though it probably is solvable with enough computing power and enough centuries to churn away at the problem.

For the difficulties in more important research, consider the well-known difficulty of computing how proteins fold, even after the models are greatly simplified.
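The classic Levinthal back-of-envelope gives a feel for why. The numbers below (a 100-residue protein, roughly three conformations per residue, sampling at 10^13 conformations per second) are the standard textbook assumptions, not figures from any particular study:

```python
# Levinthal's classic back-of-envelope: even a toy model of protein
# folding has an astronomically large conformation space.
# Standard textbook assumptions: 100 residues, ~3 backbone
# conformations per residue, sampling at ~10^13 conformations/second.

residues = 100
conformations_per_residue = 3
samples_per_second = 1e13
seconds_per_year = 3.15e7

total = conformations_per_residue ** residues  # ~5.2e47 conformations
years = total / samples_per_second / seconds_per_year

print(f"Conformations: {total:.2e}")            # ~5.15e+47
print(f"Years to enumerate them: {years:.2e}")  # ~1.6e+27 years
```

Real proteins fold in milliseconds, of course, which is precisely the paradox: nature isn’t doing an exhaustive search, and neither can our computers.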

There is also the issue of just how much longer the trend of cheaper, faster computing power can be maintained. As Gordon Moore, author of the much-misunderstood Moore’s Law, told TechWorld.Com earlier this year when asked how much longer current trends in increasing computer power could continue,

It can’t continue forever — the nature of exponentials is that you push them out and eventually disaster approaches. But in terms of size you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it will be two or three generations of chip before we get that far.
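One way to see why Moore counts in single-digit generations: the thinnest layers on a mid-2000s chip were already only a few atoms thick. Here’s a rough sketch; the ~1.2 nm gate oxide, ~0.25 nm atomic layer, and 0.7x-per-generation shrink are ballpark assumptions of mine, not Moore’s figures:

```python
# Rough sketch of Moore's point: the gate oxide, already only a few
# atomic layers thick, hits a hard floor within a few generations.
# Assumed numbers (mine, not Moore's): ~1.2 nm gate oxide circa 2005,
# ~0.25 nm per atomic layer, ~0.7x linear shrink per chip generation.

oxide_nm = 1.2        # assumed gate oxide thickness, mid-2000s
atom_layer_nm = 0.25  # rough thickness of one atomic layer
generation = 0

while oxide_nm / atom_layer_nm >= 2:  # can't go much below ~2 layers
    generation += 1
    oxide_nm *= 0.7
    print(f"Generation {generation}: oxide ~{oxide_nm:.2f} nm "
          f"(~{oxide_nm / atom_layer_nm:.1f} atomic layers)")

# Prints three generations before the oxide bottoms out, which squares
# with Moore's "two or three generations of chip."
```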

Moore is also skeptical of the ability of nanocomputing and similar technologies to grow beyond specialized applications such as for bioanalytic tests.

The Singularity Is Fiction, Not Science

Glenn Reynolds has an odd, but all too typical, defense of the bizarre notion of the technological singularity. Addressing critics of singularity theology, Reynolds writes,

I’ve heard talk about the Singularity dismissed as “the rapture for nerds,” but I think that’s mere dismissal, and not very persuasive. It is, instead, an illustration of Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” I wrote a song about that, too, once but it wasn’t a hymn!

There might be some sort of argument in there, but I confess I cannot find it. Fortunately, Reynolds helps debunk the singularity nonsense by linking to a post by Phil Bowermaster. Bowermaster in turn quotes from singularity guru Ray Kurzweil’s book, The Singularity Is Near. Kurzweil nicely illustrates the sort of nonsense that lies just under the surface of most strains of singularity arguments,

Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably towards this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking.

This is fiction, not science. Kurzweil’s description of evolution as moving towards “greater levels of subtle attributes such as love” is simply Teilhard de Chardin’s teleological nonsense coated with a high-tech veneer. Instead of de Chardin’s Omega Point, we’re given the “singularity” which will bring transcendence.