Future of Humanity Institute’s New Book ‘Human Enhancement’

The Future of Humanity Institute has put together a book to be published by Oxford University Press this year, Human Enhancement. According to the FHI’s website,

To what extent should we use technology to try to make better human beings? Because of the remarkable advances in biomedical science, we must now find an answer to this question.

Human enhancement aims to increase human capacities above normal levels. Many forms of human enhancement are already in use. Many students and academics take cognition enhancing drugs to get a competitive edge. Some top athletes boost their performance with legal and illegal substances. Many an office worker begins each day with a dose of caffeine. This is only the beginning. As science and technology advance further, it will become increasingly possible to enhance basic human capacities to increase or modulate cognition, mood, personality, and physical performance, and to control the biological processes underlying normal aging. Some have suggested that such advances would take us beyond the bounds of human nature.

These trends, and these dramatic prospects, raise profound ethical questions. They have generated intense public debate and have become a central topic of discussion within practical ethics. Should we side with bioconservatives, and forgo the use of any biomedical interventions aimed at enhancing human capacities? Should we side with transhumanists and embrace the new opportunities? Or should we perhaps plot some middle course?

The first chapter is available online as a PDF.

Future of Humanity Institute’s Roadmap Report on Whole Brain Emulation

The Future of Humanity Institute has released a technical report on the feasibility of whole brain emulation (PDF): reproducing the brain, and hopefully the mind, in a computer medium.

The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original, that when run on appropriate hardware, it will behave in essentially the same way as the original.

On the one hand, co-authors Anders Sandberg and Nick Bostrom do an excellent job of outlining just how speculative WBE still is, and the plethora of breakthroughs and advances that would be required just to move backing up your brain from science-fiction idea to even a remote possibility.

Computing power turns out to be one of the likely limiting factors, as at present the processing power needed to simulate all of the neurons is insanely large. On the other hand, as Sandberg and Bostrom note, we already know of at least one machine that is perfectly capable of running such enormously large parallel computations, namely the brain itself, so we already know it is possible. We just have to figure out an efficient way to reverse engineer the damn thing.
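To get a feel for why the numbers are so daunting, here is a minimal back-of-envelope sketch in Python. The neuron count, synapses-per-neuron figure, firing rate, and operations-per-event figure below are rough ballpark assumptions on my part, not values taken from the roadmap report, which itself considers several levels of simulation detail with wildly different requirements.

```python
# Back-of-envelope estimate of the compute needed for whole brain
# emulation at the spiking-neuron level. All figures are rough,
# commonly cited ballpark assumptions, not numbers from the report.

NEURONS = 8.6e10               # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4      # ~10,000 synapses per neuron (rough average)
AVG_FIRING_RATE_HZ = 10        # assumed average spike rate, spikes/second
FLOPS_PER_SYNAPSE_EVENT = 10   # assumed ops to update one synapse per spike

synapses = NEURONS * SYNAPSES_PER_NEURON
flops_needed = synapses * AVG_FIRING_RATE_HZ * FLOPS_PER_SYNAPSE_EVENT

print(f"Total synapses:     {synapses:.1e}")
print(f"Estimated compute:  {flops_needed:.1e} FLOPS")
# ~8.6e16 FLOPS under these assumptions, and orders of magnitude more
# if finer-grained models (compartments, ion channels, molecular
# detail) turn out to be necessary.
```

Even under these charitable assumptions the total lands near 10^17 operations per second, and dropping down to more detailed levels of simulation multiplies the requirement enormously, which is why compute looms so large as a bottleneck.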

It’s The End of the World As We Know It – Ronald Bailey on Existential Threats

In July, Ronald Bailey wrote several articles for Reason while attending the Global Catastrophic Risks Conference in Oxford. You can read Bailey’s dispatches here, here and here.

The conference was sponsored by Oxford’s Future of Humanity Institute, which is run by the always interesting Nick Bostrom. Bostrom opened the conference with perhaps the one little bit of good news on existential threats: so far, none of them have come to pass,

The good news is that no existential catastrophe has happened. Not one. Yet.

On the other hand, depending on how far back you want to date the first Homo sapiens or Homo sapiens-like ancestor, the best explanation for this could simply be that we haven’t been around long enough to face an existential catastrophe. And, of course, the evidence supports the claim that at times the breeding population of our distant ancestors was reduced to extremely low levels.

Bostrom himself noted the debate surrounding the Toba supervolcano eruption, which some have speculated may have reduced the human population to a few thousand people, though there is also some evidence that the reduction in population may not have been quite that severe.

According to Bailey, Bostrom argued the biggest existential threats facing humanity are self-induced,

Bostrom did note that people today are safer from small to medium threats than ever before. As evidence he cites increased life expectancy from 18 years in the Bronze Age to 64 years today (the World Health Organization thinks it’s 66 years). And he urged the audience not to let future existential risks occlude our view of current disasters, such as 15 million people dying of infectious diseases every year, 3 million from HIV/AIDS, 18 million from cardiovascular diseases, and 8 million per year from cancer. Bostrom did note that, “All of the biggest risks, the existential risks are seen to be anthropogenic, that is, they originate from human beings.” The biggest risks include nuclear war, biotech plagues, and nanotechnology arms races. The good news is that the biggest existential risks are probably decades away, which means we have time to analyze them and develop countermeasures.

In his final dispatch from the conference, Bailey reported on Joseph Cirincione who spoke at the conference and noted how human civilization almost ended in 1995 due to, of all things, a Norwegian weather satellite,

With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995 when Norway launched a weather satellite and Russian military officials mistook it as a submarine launched ballistic missile aimed at producing an electro-magnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian “football” in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.

Cirincione noted that worldwide stockpiles of nuclear weapons have been reduced dramatically since the end of the Cold War, and the possibility for a worldwide disarmament of nuclear weapons is higher than at any time since 1945.

Bailey also reports on a few folks who presented the view that strong AI and/or nanotechnology pose serious existential risks, but the arguments presented there (at least as filtered through Bailey) seemed shallow,

In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions. Social disruption creates the opportunity for a charismatic personality to take hold. “Nanotechnology could lead to some form of world dictatorship,” said [the Center for Responsible Nanotechnology’s Michael] Treder. “There is a global catastrophic risk that we could all be enslaved.”

Ok, but the reason jobs would be eliminated and this would be “an age of nanotech abundance” would be precisely that the little nanobots would be doing all the work and the resulting goods would be essentially free. I guess if by “massive social disruptions” you mean everyone skiing and hanging out at the beach instead of working, then yeah, ok, but I doubt that’s going to lead to a worldwide dictator (who, as a reactionary, is probably going to want to force people to go back to work, which is about as attractive an offer as religious sects that demand celibacy).

Maybe it’s just me, but I’m worried about more abstract possibilities such as a devastating gamma ray burst, which would wipe out all of humanity except for Bruce Banner. And there’s always that old standby, entropy.