How Accurate Are Ray Kurzweil’s Predictions?

Back in September, while announcing that Ray Kurzweil would appear at an event, Peter Diamandis linked to this Wikipedia page claiming that Kurzweil’s futurist predictions have an 86 percent accuracy rate. Kurzweil himself has defended the accuracy of his predictions, though I’m unsure if he agrees with the content of the Wikipedia page in question.

Personally, I’m a huge fan of Kurzweil and want to believe his predictions, but am always mindful of Richard Feynman’s admonition that “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

So let's take a look at a single prediction that Kurzweil made in his 1999 book, The Age of Spiritual Machines, and see how a critic characterizes the prediction and how Kurzweil responds.

In a Forbes article from 2012, Alex Knapp argues that Kurzweil’s predictions in The Age of Spiritual Machines were mostly wrong. Knapp divides Kurzweil’s predictions into failed predictions, partially met predictions, and met predictions. One of the “failed predictions” is this:

“Human musicians routinely jam with cybernetic musicians.”

Knapp notes, correctly in my opinion, that this simply did not happen.

Most programs that ‘create’ music are pretty bad – and musicians don’t jam with them.

Kurzweil responds, however, that he believes his prediction was correct and that human musicians do routinely jam with cybernetic musicians:

Knapp mentions music accompanist software that he finds impressive but still rates this prediction as “wrong.” I cite many more popular applications (in the predictions essay cited above) where people jam with their computers. For example, anyone hear of guitar hero?

In this response, Kurzweil shows he’s little different from most people who prognosticate about the future: he makes predictions general enough that, 10 years later, it is relatively easy to find something that corresponds in some way to the earlier, vague claim.

“Human musicians routinely jam with cybernetic musicians” becomes “people play rhythm video games.”

The reader can judge for themselves whether this is a reasonable equivalence, but it should also be noted that if it is, then Kurzweil’s prediction was fulfilled almost immediately after the publication of his book.

The Age of Spiritual Machines was published on January 1, 1999. Less than two months later, on February 16, 1999, Konami released the first version of its GuitarFreaks arcade game.

Michael Anissimov on ‘The Challenge of Self-Replication’

Michael Anissimov has an interesting post on how exactly humanity is going to develop desktop-level self-replicating machines (which are no longer science fiction but an emerging reality) while at the same time avoiding a doomsday scenario in which someone decides to, say, use a self-replicating machine to pump out swarms of nanobots carrying weaponized smallpox or something similar.

What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. Currently Mumbai is under attack by unidentified terrorists — they are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive and results in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale — microwhip scorpions, parasitic wasps, fairyflies and the like — merely copying these designs without any intelligent thought will become possible when we can scan and construct on the atomic level. Enclosing every human being in an active membrane may be the only imaginable solution to this challenge. Offense will be easier than defense, as offense needs only to succeed once, even after a million failures.

. . .

Instead of just saying, “we’re screwed”, the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place with the current political and economic order are simply fooling themselves, and putting their lives at risk.
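As a rough sanity check on the arithmetic in that passage: the 1 nanogram-per-kilogram LD50 and the single kilogram of toxin are Anissimov’s figures, while the average body mass of about 70 kg and a World War II death toll of roughly 75 million are my own back-of-the-envelope assumptions.

```python
# Rough check of the toxin arithmetic quoted above.
# The LD50 (1 ng/kg) and the 1 kg of toxin come from the quote;
# the 70 kg body mass and ~75 million WWII deaths are my assumptions.

LD50_NG_PER_KG = 1.0        # quoted lethality: 1 nanogram per kg of body weight
BODY_MASS_KG = 70.0         # assumed average body mass
TOXIN_KG = 1.0              # quantity of toxin in the quoted scenario
NG_PER_KG = 1e12            # nanograms in one kilogram

lethal_dose_ng = LD50_NG_PER_KG * BODY_MASS_KG             # ~70 ng per person
theoretical_doses = TOXIN_KG * NG_PER_KG / lethal_dose_ng  # ~1.4e10 doses

WW2_DEATHS = 75e6           # rough midpoint of common estimates

print(f"Theoretical lethal doses in 1 kg: {theoretical_doses:.1e}")
print(f"Multiple of WWII death toll: {theoretical_doses / WW2_DEATHS:.0f}x")
```

On paper that works out to on the order of ten billion lethal doses in a single kilogram; the enormous caveat, which the quote itself acknowledges with its “several tonnes of delivery mechanisms,” is actually getting the toxin to its targets.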

Ah yes, when terrorism meets script kiddies.

On the other hand, I’m not so sure his criticism of Kurzweil and the others is warranted. It could just be me, but it seems likely that we’ll develop a matter replicator capable of producing a nightmare doomsday scenario long before we achieve the ability to build the sort of benevolent singleton AI that Anissimov prescribes as a solution.

Regardless, both Anissimov’s post on the matter and the discussion in the comments are worth reading and pondering.