Storing Hundreds of Terabytes of Data for Billions of Years

The University of Southampton recently issued a press release highlighting progress its scientists have made on digital storage methods that could potentially survive for billions of years:

Using nanostructured glass, scientists from the University’s Optoelectronics Research Centre (ORC) have developed the recording and retrieval processes of five dimensional (5D) digital data by femtosecond laser writing.

The storage allows unprecedented properties including 360 TB/disc data capacity, thermal stability up to 1,000°C and virtually unlimited lifetime at room temperature (13.8 billion years at 190°C), opening a new era of eternal data archiving. As a very stable and safe form of portable memory, the technology could be highly useful for organisations with big archives, such as national archives, museums and libraries, to preserve their information and records.
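To put that 360 TB figure in perspective, here's a quick back-of-the-envelope comparison in Python; the disc capacity is from the press release, while the Blu-ray reference point is my own, not part of the announcement:

```python
# Scale check for the quoted 5D disc capacity.  The 360 TB figure is from
# the press release; the Blu-ray comparison point is my own assumption.
DISC_CAPACITY_TB = 360    # quoted capacity of one 5D glass disc
BLU_RAY_GB = 50           # dual-layer Blu-ray, for comparison

discs = DISC_CAPACITY_TB * 1_000 / BLU_RAY_GB
print(f"One 5D disc holds roughly {discs:,.0f} dual-layer Blu-rays' worth of data")
# -> One 5D disc holds roughly 7,200 dual-layer Blu-rays' worth of data
```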

The Optoelectronics Research Centre has posted a short video on YouTube showing data being written to such a glass disc using a femtosecond laser writing system.

A few years ago, Hitachi was reportedly working on a glass-based data storage system that also etched data onto glass with a laser, although at much lower densities than the Southampton researchers are aiming for:

The company’s main research lab has developed a way to etch digital patterns into robust quartz glass with a laser at a data density that is better than compact discs, then read it using an optical microscope. The data is etched at four different layers in the glass using different focal points of the laser.

. . .

Hitachi said the new technology will be suitable for storing “historically important items such as cultural artifacts and public documents, as well as data that individuals want to leave for posterity.”

. . .

Hitachi has succeeded at storing data at 40MB per square inch, above the record for CDs, which is 35MB per square inch.
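For a rough sense of what that density buys, here's an illustrative calculation; the 40 MB per square inch and four layers are Hitachi's quoted figures, but the disc geometry (a standard 12 cm platter with an unusable hub) is my assumption:

```python
import math

# Rough capacity of a CD-sized quartz platter at Hitachi's quoted density.
# Quoted figures: 40 MB per square inch across 4 layers.  The disc
# geometry below is my own assumption, not Hitachi's.
DENSITY_MB_PER_IN2 = 40
LAYERS = 4
OUTER_RADIUS_CM = 5.8     # usable outer radius of a 12 cm disc
INNER_RADIUS_CM = 2.25    # hub region carrying no data
IN2_PER_CM2 = 1 / 2.54**2

area_in2 = math.pi * (OUTER_RADIUS_CM**2 - INNER_RADIUS_CM**2) * IN2_PER_CM2
capacity_mb = DENSITY_MB_PER_IN2 * area_in2 * LAYERS
print(f"~{capacity_mb:,.0f} MB (~{capacity_mb / 1000:.1f} GB) per disc")
# -> roughly 2.2 GB per disc: a few CDs' worth, far below the 5D target
```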

Hitachi has mentioned its glass-based research several times since that 2012 announcement but, as far as I know, has not shipped anything (probably due to the relatively low data density). In 2014, Hitachi announced it had developed a system that could reliably read/write to a 100-layer glass disc.

These glass-based systems remind me of science fiction writer Charles Stross’s idea of using synthetic diamond to store immense amounts of data:

My model of a long term high volume data storage medium is a synthetic diamond. Carbon occurs in a variety of isotopes, and the commonest stable ones are carbon-12 and carbon-13, occurring in roughly equal abundance. We can speculate that if molecular nanotechnology as described by, among others, Eric Drexler, is possible, we can build a device that will create a diamond, one layer at a time, atom by atom, by stacking individual atoms — and with enough discrimination to stack carbon-12 and carbon-13, we’ve got a tool for writing memory diamond. Memory diamond is quite simple: at any given position in the rigid carbon lattice, a carbon-12 followed by a carbon-13 means zero, and a carbon-13 followed by a carbon-12 means one. To rewrite a zero to a one, you swap the positions of the two atoms, and vice versa.

It’s hard, it’s very stable, and it’s very dense. How much data does it store, in practical terms?

The capacity of memory diamond storage is of the order of Avogadro’s number of bits per two molar weights. For diamond, that works out at 6.022 × 10²³ bits per 25 grams. So going back to my earlier figure for the combined lifelog data streams of everyone in Germany — twenty-five grams of memory diamond would store six years’ worth of data.

Six hundred grams of this material would be enough to store lifelogs for everyone on the planet (at an average population of, say, eight billion people) for a year. Sixty kilograms can store a lifelog for the entire human species for a century.

In more familiar terms: by the best estimate I can track down, in 2003 we as a species recorded 2500 petabytes — 2.5 × 10¹⁸ bytes — of data. That’s almost one milligram. The Google cluster, as of mid-2006, was estimated to have 4 petabytes of RAM. In memory diamond, you’d need a microscope to see it.

So, it’s reasonable to conclude that we’re not going to run out of storage any time soon.
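Stross’s scheme is concrete enough to model in a few lines. Here’s a toy Python sketch of the isotope-pair encoding he describes, plus a check of his capacity arithmetic; the function names and the list-of-isotopes representation are mine, purely for illustration, since nothing like this hardware exists:

```python
# Toy model of Stross's memory diamond.  The pair encoding is as he
# describes: (C12, C13) is a zero, (C13, C12) is a one.  Everything
# else here is illustrative scaffolding.
AVOGADRO = 6.022e23    # bits per two molar weights (~25 g of carbon)
C12, C13 = 12, 13      # isotope masses standing in for lattice sites

def encode(bits):
    """Map a bit sequence to a flat sequence of isotope masses."""
    pair = {0: (C12, C13), 1: (C13, C12)}
    return [atom for b in bits for atom in pair[b]]

def decode(atoms):
    """Read each adjacent pair of lattice sites back as one bit."""
    return [0 if a < b else 1 for a, b in zip(atoms[::2], atoms[1::2])]

assert decode(encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]

# Check of the capacity arithmetic: 2500 PB recorded in 2003, in bits.
bits_2003 = 2.5e18 * 8
grams = 25 * bits_2003 / AVOGADRO
print(f"2003's recorded data: ~{grams * 1000:.2f} mg of memory diamond")
# -> ~0.83 mg, which squares with "almost one milligram"

# And the 600 g claim: bytes stored in 600 grams.
bytes_600g = (600 / 25) * AVOGADRO / 8
print(f"600 g holds ~{bytes_600g:.2e} bytes")
# -> ~1.8e24 bytes, about 226 TB per person-year for 8 billion people
```

The numbers check out: a year of everyone’s lifelogs in 600 grams implies a sustained recording rate of roughly 7 MB per second per person, which is in the range of a compressed audio/video stream.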

Faster, please.
