Michael Anissimov has an interesting post on how exactly humanity is going to develop desktop-level self-replicating machines (no longer science fiction but an emerging reality) while at the same time avoiding a doomsday scenario in which someone decides to, say, use a self-replicating machine to pump out swarms of nanobots carrying weaponized smallpox or something similar.
What is remarkable is that some, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, seem to argue that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. Currently Mumbai is under attack by unidentified terrorists — they are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive and results in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale — microwhip scorpions, parasitic wasps, fairyflies and the like — and merely copying these designs, without any intelligent thought, will become possible when we can scan and construct on the atomic level. Enclosing every human being in an active membrane may be the only imaginable solution to this challenge. Offense will be easier than defense, because offense needs only to succeed once, even after a million failures.
. . .
Instead of just saying, “we’re screwed”, the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place with the current political and economic order are simply fooling themselves, and putting their lives at risk.
Ah yes, when terrorism meets script kiddies.
On the other hand, I’m not so sure his criticism of Kurzweil and company is warranted. It could just be me, but it seems likely that we’ll develop a matter replicator capable of producing a nightmare doomsday scenario long before we achieve the ability to create the sort of singleton AI that Anissimov prescribes as a solution.
Regardless, both Anissimov’s post on the matter and the discussion in the comments are worth reading and pondering.