Insanely Bad Ideas

Wired has put Lawrence Lessig’s April 2004 column up on their web site. Lessig’s column is roughly about the decreasing costs of terrorist acts and what free societies should do to minimize the possibility of such acts.

Before I criticize Lessig's views, I should note that his column reminded me of one of my favorite episodes of The Outer Limits. I can't remember the title, but the plot is that a physics student figures out a cheap, easy-to-reproduce way to unleash energy approximating a nuclear weapon. The student creates a bomb and plants it in a lecture hall along with a blackmail demand: bring three people the student hates outside the lecture hall and execute them, or else he'll detonate the bomb. Of course the authorities don't believe him at first, but he provides a demonstration that convinces them. At one point the young man, who repeatedly insists that his discovery is both obvious and inevitable, speculates that this is why there is no evidence of sentient life elsewhere in the universe: all advanced societies eventually reach the point where anyone can easily manufacture the means of mass destruction, and decline and destruction then become just a matter of time.

Lessig poses a similar question to a class, asking how society should deal with the possibility of an easily created, genetically engineered smallpox disease (what he calls Insanely Destructive Devices),

Like many professors, I think about hard questions by teaching a class. So I asked a local genius, Silicon Valley venture capitalist and polymath Steve Jurvetson, to help frame a course around the challenges raised by Joy. He opened the class with the smallpox example and asked how a society should protect itself from innovations that lead to pox viruses with 100-percent kill rates. What strategies does it adopt when everyone, even vaccinated health care workers, are vulnerable?

The first reaction of some in the class was positively Soviet. Science must be controlled. Publications must be reviewed before being printed. Communications generally may have to be surveilled – how else can we track down the enemy? And, of course, we must build a Star Wars-like shield to protect us, and issue to every American one of those space suits that CDC workers wear. (“Dear American: You may not have health insurance, but in case of a biological attack, please use the enclosed space suit.”)

But it didn’t take long to see the futility of these responses. GNR science doesn’t require huge labs. You might not be able to conceal the work in Manhattan, but you could easily hide it in the vast wilds of, say, Montana. Moreover, a great deal of important work would be lost if the government filtered everything – as would the essence of a free society. However comforting the Star Wars-like Virus Defense Initiative might be, engineered diseases would spread long before anyone could don a space suit.

First, my answer: if technology ever advances to the point where diseases like this can be genetically engineered with minimal effort, and there is no concomitant increase in our ability to defend against such diseases, then humanity is doomed. These devices will be used, and if they are possible, their development is probably inevitable. Bill Joy's "stop-the-research-I-want-to-get-off" approach won't work.

Lessig and his students think there's an out, but they make a crucial error in assuming that those who carry out terrorist acts or would engage in mass destruction are rational in the same way that Lessig and his students are rational,

Then one student suggested a very different approach. If we can’t defend against an attack, perhaps the rational response is to reduce the incentives to attack. Rather than designing space suits, maybe we should focus on ways to eliminate the reasons to annihilate us. Rather than stirring up a hornet’s nest and then hiding behind a bush, maybe the solution is to avoid the causes of rage. Crazies, of course, can’t be reasoned with. But we can reduce the incentives to become a crazy. We could reduce the reasonableness – from a certain perspective – for finding ways to destroy us.

The point produced a depressing recognition. There’s a logic to P2P threats that we as a society don’t yet get. Like the record companies against the Internet, our first response is war. But like the record companies, that response will be either futile or self-destructive. If you can’t control the supply of IDDs, then the right response is to reduce the demand for IDDs. Yet as everyone in the class understood, in the four years since Joy wrote his Wired piece, we’ve done precisely the opposite. Our present course of unilateral cowboyism will continue to produce generations of angry souls seeking revenge on us.

Unfortunately, Lessig doesn’t say how far he would go to reduce the risk from crazies and rational people. For example, how far should we go to reduce the resentment of people like Eric Rudolph, who appears to be an extremely rational terrorist? Is the key to preventing future Eric Rudolphs to “avoid the causes of rage” — namely, legal abortion — that motivate Rudolph and others like him?

Al Qaeda-connected terrorists have complained that the true enemy of Islam is secular democracy. How, then, should we go about lowering fanatical Muslims' resentment of secular democracy? (Or, for that matter, the resentment of Christian Reconstructionists here at home?)

As Ronald Bailey puts it in an article on possible bioterrorist threats of the future,

Biodefense depends not on abandoning technology or appeasing our potential adversaries, but on nurturing a robust biotechnology. Remember, we are talking about “dual use” technologies—for both offense and defense.

Changing policies because the policies are bad or no longer effective is one thing, but changing them in some vain effort to minimize future terrorism is pointless.