Nice to see that Western Digital-owned HGST is shipping a 6TB hard drive that uses helium to assist in cramming that capacity into a 3.5″ form factor.
According to an HGST press release,
Leveraging the inherent benefits of helium, which is one-seventh the density of air, the new Ultrastar He6 drive features HGST’s innovative 7Stac™ disk design with 6TB, making it the world’s highest capacity HDD with the best TCO for cloud storage, massive scale-out environments, disk-to-disk backup, and replicated or RAID environments.
“With ever-increasing pressures on corporate and cloud data centers to improve storage efficiencies and reduce costs, HGST is at the forefront delivering a revolutionary new solution that significantly improves data center TCO on virtually every level – capacity, power, cooling and storage density – all in the same 3.5-inch form factor,” said Brendan Collins, vice president of product marketing, HGST. “Not only is our new Ultrastar helium hard drive helping customers solve data center challenges today, our mainstream helium platform will serve as the future building block for new products and technologies moving forward. This is a huge feat, and we are gratified by the support of our customers in the development of this platform.”
What’s really wild is HGST’s suggestion that, since the drives are sealed to keep the helium from leaking out, this could lead to some clever liquid cooling options (emphasis added),
One solution, which has been explored by many, is liquid cooling. Liquid, which is denser than air, can remove heat more efficiently and maintain a more constant operating temperature. However, traditional drives cannot be submerged as they are open to the atmosphere and would allow the cooling liquid inside, damaging or destroying the HDD. HGST’s HelioSeal platform provides the only cost-effective solution for liquid cooling as the drives are hermetically sealed and enable operation in most any non-conductive liquid. Today, HGST is working with leading innovators in this space such as Huawei and Green Revolution Cooling.
Interesting announcement from Sony and Panasonic about collaborating on a new standard for a 300GB optical disc. They hope to have the standard finalized by the end of 2015:
Sony Corporation (‘Sony’) and Panasonic Corporation (‘Panasonic’) today announced that they have signed a basic agreement with the objective of jointly developing a next-generation standard for professional-use optical discs, with the objective of expanding their archive business for long-term digital data storage. Both companies aim to improve their development efficiency based on the technologies held by each respective company, and will target the development of an optical disc with recording capacity of at least 300GB by the end of 2015. Going forward, Sony and Panasonic will continue to hold discussions regarding the specifications and other items relating to the development of this new standard.
These sorts of things rarely filter down to the consumer level; they tend instead to serve high-end archiving needs, such as storing the data generated in producing a movie.
One possibility, however, is that by the time the standard is complete there could be a demand for something like this for 4K movies.
Researchers at Swinburne University of Technology explain how they invented a clever way to pack a lot more data on optical media: as much as 1,000 terabytes on a DVD that today holds just 4.7 gigabytes.
In our study, we showed how to break this fundamental limit by using a two-light-beam method, with different colours, for recording onto discs instead of the conventional single-light-beam method.
Both beams must abide by Abbe’s law, so they cannot produce smaller dots individually. But we gave the two beams different functions:
The first beam (red, in the figure right) has a round shape, and is used to activate the recording. We called it the writing beam.
The second beam – the purple donut-shape – plays an anti-recording function, inhibiting the function of the writing beam.
The two beams were then overlapped. As the second beam cancelled out the first in its donut ring, the recording process was tightly confined to the centre of the writing beam.
This new technique produces an effective focal spot of nine nanometres – or one ten thousandth the diameter of a human hair.
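For some context on the limit they’re beating: Abbe’s law (not spelled out in the excerpt) says a single focused beam of wavelength \(\lambda\) passing through optics with numerical aperture \(\mathrm{NA}\) can’t be focused to a spot much smaller than

```latex
d = \frac{\lambda}{2\,\mathrm{NA}}
```

For visible light (\(\lambda \approx 500\) nm) and an NA near 1, that works out to roughly 250 nm, so a 9 nm effective spot is well over an order of magnitude below what a single beam could ever manage.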
It is not quite Charles Stross’ vision of memory diamonds, but it’s good to see research progressing on how we’re going to store all that 4K porn.
I really like Silicon Forensics’ hard drive transporter for 3.5″ hard drives, but I’ve got 10-12 hard drives stuck in a locking drawer, and the bulk from 10 or 12 of the hard drive transporters would be a bit much. Enter Silicon Forensics’ Hard Drive Shipping Case:
It holds 12 hard drives in a foam-padded case suitable for shipping, if you wanted to — though I just want a nice storage solution for a bunch of loose drives.
This thing goes for $129.99 and weighs 8 lbs. I can’t wait to get one.
I knew someone who several years ago wrote a book … on his laptop … for a year … and never backed it up or retained a print copy. You can probably guess what happened next.
Almost as bad are these folks who relied on a cloud-based company to store backups of episodes of the children’s show they produced. One malicious employee later, and (per the Register),
CyberLynk had fired an employee called Michael Scott Jewson and, according to a Honolulu courthouse news report, one month after being given the boot, Jewson accessed CyberLynk servers and wiped out 304GB of data, including 14 Zodiac Island episodes, a full season of the show.
The Zodiac Island producers were based in Hawaii, and CyberLynk in Wisconsin. A cloud-based service is probably a very good way for a production team to share assets among dispersed groups all working on a television show, but as the primary backup as well? Seriously?
Especially considering the small size of the dataset involved. Local backup of 304GB would have been dirt cheap. Having a cloud-based backup for convenience, or as an alternative in case of a local disaster, is a good idea, but I can’t see ever giving up local backups entirely unless the dataset is too large to back up locally in any meaningful way (if they were dealing with hundreds of terabytes, then maybe I’d understand why they weren’t keeping local backups as well, but 304GB…puhleeze).