Using MMOs as a Turing Test

Interesting report at DODBuzz.Com about U.S. military research and development in a number of areas, including “artificial” intelligence. Dr. John Parmentola, Director of Research and Laboratory Management with the Army’s science and technology office, talks about using an MMO as a Turing Test for AIs,

And if you do end up at the Army Science Conference next month, don’t be startled by the three-dimensional holographic image of a soldier talking to you (not that the regenerated arm, mind-controlled computer or implanted memories won’t freak you out enough) as you walk down the hall. It might just be the virtual human Army researchers are creating to make simulators and war games more realistic for training, Parmentola said.

They’re working on creating “photorealistic looking and acting human beings” that can think on their own, have emotions and talk in local slang.

“I actually interact with virtual humans in terms of asking them questions and they’re responding,” Parmentola said.

To test out the computer generated humans’ “humanity,” Parmentola and his researchers want to unleash some of their cyber Soldiers into so-called “massively multi-player online games” such as “World of Warcraft” or “Eve Online” – games frequented by thousands of super-competitive human players in teams of virtual characters fighting battles that can last for days.

“We want to use the massively multi-player online game as an experimental laboratory to see if they’re good enough to convince humans that they’re actually human,” he said.

So someday that 13-year-old spamming “this is so fucking gay” in the WoW trade channel might turn out to be SkyNet.

It’s The End of the World As We Know It – Ronald Bailey on Existential Threats

In July, Ronald Bailey wrote several articles for Reason while attending the Global Catastrophic Risks Conference in Oxford. You can read Bailey’s dispatches here, here and here.

The conference was sponsored by Oxford’s Future of Humanity Institute, which is run by the always interesting Nick Bostrom. Bostrom opened the conference with perhaps the one bit of good news about existential threats: so far, none of them have come to pass,

The good news is that no existential catastrophe has happened. Not one. Yet.

On the other hand, depending on how far back you want to go to date the first Homo sapiens or Homo sapiens-like ancestor, the best explanation for this could simply be that we haven’t been around long enough to face an existential catastrophe. And, of course, the evidence supports the claim that at times the breeding population of our distant ancestors was reduced to extremely low levels.

Bostrom himself noted the debate surrounding the Toba supervolcano eruption, which some have speculated may have reduced the human population to a few thousand people, though there is also some evidence that the reduction in population may not have been quite that severe.

According to Bailey, Bostrom argued the biggest existential threats facing humanity are self-induced,

Bostrom did note that people today are safer from small to medium threats than ever before. As evidence he cites increased life expectancy from 18 years in the Bronze Age to 64 years today (the World Health Organization thinks it’s 66 years). And he urged the audience not to let future existential risks occlude our view of current disasters, such as 15 million people dying of infectious diseases every year, 3 million from HIV/AIDS, 18 million from cardiovascular diseases, and 8 million per year from cancer. Bostrom did note that, “All of the biggest risks, the existential risks are seen to be anthropogenic, that is, they originate from human beings.” The biggest risks include nuclear war, biotech plagues, and nanotechnology arms races. The good news is that the biggest existential risks are probably decades away, which means we have time to analyze them and develop countermeasures.

In his final dispatch from the conference, Bailey reported on Joseph Cirincione, who spoke at the conference and noted how human civilization almost ended in 1995 due to, of all things, a Norwegian weather rocket,

With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995 when Norway launched a weather rocket and Russian military officials mistook it for a submarine-launched ballistic missile aimed at producing an electromagnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian “football” in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.

Cirincione noted that worldwide stockpiles of nuclear weapons have been reduced dramatically since the end of the Cold War, and the possibility for a worldwide disarmament of nuclear weapons is higher than at any time since 1945.

Bailey also reports on a few folks who presented the view that a strong AI and/or nanotechnology present serious existential risks, but the arguments presented there (at least as filtered through Bailey) seemed shallow,

In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions. Social disruption creates the opportunity for a charismatic personality to take hold. “Nanotechnology could lead to some form of world dictatorship,” said [the Center for Responsible Nanotechnology’s Michael] Treder. “There is a global catastrophic risk that we could all be enslaved.”

Ok, but the reason jobs would be eliminated and this would be “an age of nanotech abundance” is precisely that the little nanobots would be doing all the work, and the resulting goods would be essentially free. I guess if by “massive social disruptions” you mean everyone skiing and hanging out at the beach instead of working, then yeah, ok, but I doubt that’s going to lead to a worldwide dictator (who, as a reactionary, is probably going to want to force people to go back to work — about as attractive an offer as religious sects that demand celibacy).

Maybe it’s just me, but I’m more worried about abstract possibilities such as a devastating gamma ray burst that would wipe out all of humanity except for Bruce Banner. And there’s always that old standby, entropy.