Cory Doctorow’s Devastating Rant Against Andrew Orlowski

Cory Doctorow recently quit his job and poured a little of that extra time he’s got into writing this devastating attack against the world’s worst pseudo-journalist, The Register’s Andrew Orlowski. Orlowski is so wrong so often you have to wonder if The Register doesn’t encourage him to make gross errors of fact simply because the resulting controversy leads to more page views for the online rag. Orlowski’s the sort who will lambaste Google over what turns out to be a misspelling that he couldn’t be bothered to check.

In late December, Orlowski ran one of his fact-free articles in which he claimed that Doctorow had posted things on Wikipedia related to his own Wikipedia entry that Doctorow had, in fact, never posted,

Orlowski put me in the vain and foolish camp because I had taken part in a discussion of my entry in which I spoke of myself in the third person, e.g. “‘Since these issues are inextricably linked to the way Doctorow has chosen to present his books to the world, I do think it is at least somewhat appropriate,’ Doctorow adds.” He also implied that this somehow tricked Wikipedia’s volunteer moderators into letting me correct the record where others had been denied.

He’s at least part right — people who talk about themselves in the third person do look pretty foolish. But he was completely wrong on the factual assertion that I had talked about myself in the third person, and so his speculation that this was the magic trick necessary to allow people to edit their own entries was invalid.

I had indeed taken part in the message-board for my Wikipedia entry, and some months later, a Wikipedia editor reorganized the page, grouping the discussions by topic. To an untrained eye it was unclear who had written what, and if you hold the kind of low opinion of me that Orlowski clearly has, it might be possible to believe that the entire message board had been written by me alone.

The worst thing is how the Register chose to correct the errors once they were pointed out. They simply removed part of the nonsense that Orlowski had written as if it had never been there. No note that the page had been altered due to errors that Doctorow brought to their attention, and certainly no apology for the Register’s continued employment of someone who makes Nigerian 419 scammers look like dedicated truthseekers.

Doctorow is wrong, however, in claiming that, of the two, Wikipedia’s sort of errors is superior to Orlowski’s sort,

Wikipedia’s transparent approach to the truth lays out all sides of the debate where all can see them and judge for themselves what the fact of the matter is. The Register’s approach hides the negotiation of truth behind invisible, silent edits, and behind the whims of writers who are free to correct (or not correct) the record as they see fit.

I couldn’t disagree more. When I visit a Wikipedia page, I have no idea who wrote any given assertion or how reliable that person is. It could be the most accurate article ever written on the topic, or it could be a piece of self-serving garbage, but it is very difficult for me as a casual Wikipedia user to ascertain authorship or reliability.

With Orlowski, however, people who encounter his work a few times know that pretty much everything he writes is bogus. Similarly, someone can visit this blog, look around, and form an opinion about how reliable they think I am. By eliminating any sort of easily traceable authorship, Wikipedia makes an important clue to an article’s accuracy unavailable.

Cory Doctorow’s DRM Talk for Hewlett-Packard

DRM Talk for Hewlett-Packard Research

Corvallis, Oregon

Cory Doctorow

European Affairs Coordinator, Electronic Frontier Foundation

www.eff.org

[email protected]

9/28/5

This text is dedicated to the public domain, using a Creative
Commons public domain dedication:

> Copyright-Only Dedication (based on United States law)
>
> The person or persons who have associated their work with this
> document (the “Dedicator”) hereby dedicate the entire copyright
> in the work of authorship identified below (the “Work”) to the
> public domain.
>
> Dedicator makes this dedication for the benefit of the public at
> large and to the detriment of Dedicator’s heirs and successors.
> Dedicator intends this dedication to be an overt act of
> relinquishment in perpetuity of all present and future rights
> under copyright law, whether vested or contingent, in the Work.
> Dedicator understands that such relinquishment of all rights
> includes the relinquishment of all rights to enforce (by lawsuit
> or otherwise) those copyrights in the Work.
>
> Dedicator recognizes that, once placed in the public domain, the
> Work may be freely reproduced, distributed, transmitted, used,
> modified, built upon, or otherwise exploited by anyone for any
> purpose, commercial or non-commercial, and in any way, including
> by methods that have not yet been invented or conceived.

Note: this essay is derived from notes for an invited talk to HP
Research on DRM. The talk was not delivered verbatim; nevertheless,
this gives a good feel for what I said that day. For the text of an
earlier talk on this subject delivered to Microsoft Research, see
http://craphound.com/msftdrm.txt .

The canonical version of this talk lives at
http://craphound.com/hpdrm.txt .

Alternate html version here (thanks, Branko Collin!):
http://www.xs4all.nl/~collin/test/hpdrm.html

I work for the Electronic Frontier Foundation, a member-supported
charitable organization that works to uphold the public interest in
technology law, policy and standards. For nearly four years, I’ve
spent my time attending DRM standards meetings, consortia, and treaty
meetings at the United Nations. In that time, again and again, I’ve
seen tech giants like HP take suicidal measures to voluntarily cripple
their products to make them more palatable to a few entertainment
companies, even though these measures make them less palatable to
virtually all of their paying customers.

Nothing epitomized this more than Carly Fiorina’s inaugural CES
address in which she promised to put DRM in every HP product. Reading
that in my office in San Francisco (I live in London now), I thought,
well, hell, I guess I’m not buying any more HP products. I’m pretty
sure I’m not the only one.

I’ve had innumerable conversations with engineers, lawyers and execs
about DRM, but it’s rare that I get the chance to systematically
explain how DRM fails as a technology, as a moral proposition, and as
a commercial initiative. I’m grateful that HP has given me that chance
today. I’m looking forward to your questions after my talk.

Now, onto the talk, in which I will try to address the security, moral
and commercial aspects of DRM.

THREAT MODELS

There is no such thing as “security” in the abstract. You can’t be
made “secure.” You can only be made “secure” *against a specific
attack*. All security discussions must begin with an analysis of a
threat and proceed to address that threat with countermeasures.

In discussions of DRM, radically different threat-models are usually
conflated to sow confusion and to disguise the implausibility of DRM.
In the paper at hand (as in many other cases), privacy-protection is
conflated with use-restriction. But these have totally different
threat-models:

* Privacy

In privacy scenarios, there is a sender, a receiver and an attacker.
For example, you want to send your credit-card to an online store. An
attacker wants to capture the number. Your security here concerns
itself with protecting the integrity and secrecy of a message in
transit. It makes no attempt to restrict the disposition of your
credit-card number after it is received by the store.

* Use-restriction

In DRM use-restriction scenarios, there is only a sender and an
attacker, *who is also the intended recipient of the message*. I
transmit a song to you so that you can listen to it, but try to stop
you from copying it. This requires that your terminal obey my
commands, even when you want it to obey *your* commands.

Understood this way, use-restriction and privacy are antithetical. As
is often the case in security, increasing the security on one axis
weakens the security on another. A terminal that is capable of being
remotely controlled by a third party who is adversarial to its owner
is a terminal that is capable of betraying its owner’s privacy in
numerous ways without the owner’s consent or knowledge. A terminal
that can *never* be used to override its owner’s wishes is by
definition a terminal that is better at protecting its owner’s
privacy.
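
A minimal sketch (not from the original talk) can make the asymmetry
concrete; it assumes the Python “cryptography” package’s Fernet
primitive, and the keys and data below are purely illustrative:

```python
# A sketch only: real systems use key exchange rather than a
# hard-coded shared key, and the byte strings are stand-ins.
from cryptography.fernet import Fernet

# Privacy threat model: sender and receiver share a secret; the
# attacker is a third party who never holds the key.
shared_key = Fernet.generate_key()   # known to sender and receiver only
in_transit = Fernet(shared_key).encrypt(b"4111 1111 1111 1111")
# An eavesdropper who captures in_transit learns nothing without shared_key.

# Use-restriction threat model: the "attacker" is the intended recipient.
# For her to play the song at all, her own device must hold the key...
device_key = Fernet.generate_key()   # necessarily shipped inside the player
locked_song = Fernet(device_key).encrypt(b"<song bytes>")
# ...so the cleartext is always recoverable on hardware she controls.
assert Fernet(device_key).decrypt(locked_song) == b"<song bytes>"
```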

THE DRM THREAT MODEL

The threat model for DRM is that an unscrupulous user will be able to
download an asset for free from the Internet instead of going through
a conditional access billing gateway. Additionally, DRM seeks to give
rightsholders the ability to restrict the use of assets after receipt
to enforce restrictions that are not related to copyright (e.g. remote
viewing, region-control).

A service operator can ensure that 100 percent of the assets behind
her conditional access system are wrapped with DRM, which means that
everyone who uses the system will receive media that is locked with
DRM. The system fails not when the DRM is cracked, but when a user
gains access to a non-DRM file, or when a user does not pay for
access.

Every file that is locked with DRM inside a conditional access system
is also available on the public Internet without DRM. In order for DRM
to be effective, a user must first freely choose to acquire the DRM
version over the non-DRM version.

The presence of DRM *cannot* entice a user to make use of the
conditional access system to acquire his media. Indeed, DRM acts as a
disincentive (there is no user who woke up this morning crying out for
a way to do less with her music). Where users buy DRM-locked files, it
is *in spite of* the DRM, or in ignorance of the DRM, but never
*because* of the DRM.

A familiar refrain from rightsholders is that “you can’t compete with
free.” It is certainly true that when your costly product is inferior
(because of use-restrictions) to the free alternative, it will be hard
to compete with free.

In the DRM world, security is breached so long as there is any person
with the wherewithal to make a cleartext copy of an asset and put it
on the Internet. In practice, this happens with amazing swiftness. Big
Champagne, a company that monitors P2P networks, says that iTunes-only
tracks (i.e. assets that are only released within DRM wrappers)
typically appear on P2P networks less than three minutes after they
are released to the iTunes Music Store.

To succeed in an attack against a DRM system, a user need not know
how to break DRM; she only needs to know how to search Google or
another general-purpose search tool for a copy that someone else has
already rendered in the clear.

THE DRM FOR PRIVACY THREAT MODEL

The privacy threat model generally revolves around accidental
disclosure and subsequent publicity. A common example of privacy
breach is an unscrupulous hospital worker who discloses the identities
of HIV-positive patients.

It is suggested that an iTunes Music Store-like model could defend
against this attack: a conditional access system restricts access to a
health record unless a valid credential (e.g. a password or smartcard)
is presented. A DRM system allows for later revocation of access once
it has been granted. However, as Don Marti points out, this is poor
security indeed:

“Deploy DRM and you can keep employees from forwarding
embarrassing email to the media. That sounds like the answer to
network-illiterate managers’ prayers, but if it’s juicy enough to
leak, it’s juicy enough to write down and retype…. Bill Gates
pitch[ed] DRM using the example of an HIV test result, which is
literally one bit of information. If you hired someone
untrustworthy enough to leak that but unable to remember it, you
don’t need DRM, you need to fix your hiring process.”

Don Marti, editor in chief, Linux Journal

Privacy almost always includes an element of personal/political power.
Children want to be private from their parents. Employees want privacy
from their bosses. Political dissidents want privacy from the Chinese
secret police.

For “privacy DRM” to work, the defender needs to be in a position to
dictate to the attacker the terms on which he may receive access to
sensitive information. For example, the IRS is supposed to destroy
your tax-records after seven years. In order for you to use DRM to
accomplish the automatic deletion of your records after seven years,
you need to convince the IRS to accept your tax records inside your
own DRM wrapper.

But the main reason to use technology to auto-erase your tax-records
from the IRS’s files is that you don’t trust them to honor their
promise to delete the records on their own. You are already
adversarial to the IRS, and you are already subject to the IRS’s
authority and in no position to order it to change its practices. The
presence or absence of DRM can’t change that essential fact.

This is a classic “who will bell the cat?” problem. Inventing new and
better-functioning bells doesn’t make getting them attached to the
cat’s collar any easier.

DRM AND NON-COPYRIGHT POLICY ENFORCEMENT

Many of the restrictions that DRM is used to enforce are unrelated to
copyright, and no DRM system can accurately model copyright, which is
highly fact-specific.

Copyright is a limited monopoly over the public copying, performance,
display and adaptation of original works. Copyright governs the
ability of commercial entities and a few noncommercial entities to
make copies, display them, etc.

Copyright does *not* confer the right to control “remote viewing” —
the ability to store a show in one place and watch it in another. It
does *not* confer the right to control timeshifting. It doesn’t confer
the right to control regional playback, as with DVDs that can only be
viewed on a US player or a European player. Copyright does *not*
confer the right to control re-sale or lending of lawfully acquired
works.

DRM is used to extend the creator’s monopoly into all kinds of
realms, though. Take the so-called “Authorized Domain”, a trendy DRM
concept that confers on rightsholders the right to define valid
familial arrangements, something so far removed from copyright as to
be in an entirely different universe. In venues where the Authorized
Domain is being planned, designers are torn between two different
potential implementation models, both of which are totally
unacceptable:

* Hard limits on domain size

Only so many devices may join the domain (as with Apple’s five-device
authorization limit for iTunes). This has many unacceptable failure
modes, including the inability to deactivate lost, stolen or damaged
devices, as well as arbitrary limits on family size.

* Multi-test limits on domain size

In this model, a series of tests are applied, including tests for
proximity, tests for existing domain size, strategies for
re-accumulating domain credits, and proprietary tests. These tests are
logically represented on flowcharts that no end-user or retailer can
possibly understand (especially given the presence of proprietary
tests). Any customer who asks a retailer, “Will this device be able to
join my domain?” will inevitably get the answer: “maybe.”

Most unacceptable is the presence of “corner cases” like divorced
families with joint custody arrangements among several children, whose
devices may be restricted from belonging to more than one domain, or
blended households created in extremis (your father being sent to an
old folks’ home, your daughter moving into a student house), that are
surely households, even if they are not traditional families, and that
may fail the tests on domain size.
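
For what it’s worth, the first model needs almost no machinery at
all, which is exactly why its failure modes are so stark. Here is a
hypothetical sketch (not from the talk; the five-device cap mirrors
Apple’s, everything else is invented):

```python
# Toy model of a hard-limit Authorized Domain; purely illustrative.
MAX_DEVICES = 5

class AuthorizedDomain:
    def __init__(self):
        self.devices = set()

    def join(self, device_id):
        # The only test is a counter, so the failure modes follow
        # directly: a lost or stolen device still occupies a slot,
        # and a sixth household member is simply refused.
        if len(self.devices) >= MAX_DEVICES:
            return False
        self.devices.add(device_id)
        return True

home = AuthorizedDomain()
for gadget in ["laptop", "phone", "kid1-player", "kid2-player", "set-top-box"]:
    home.join(gadget)

# The set-top box is stolen; with no revocation step, its replacement
# cannot join the household's domain.
print(home.join("replacement-set-top-box"))  # False
```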

DRM AS A NEGOTIATION

DRM is often characterized as the outcome of a negotiation: “You may
have access to my song if you accept my restrictions.” But DRM always
gives rightsholders the ability to unilaterally renegotiate the terms
of the deal to take away rights you acquired when you got your device
and media.

For example, many updates to iTunes contain new restrictions on the
music you purchase. In the past 18 months, iTunes has instituted the
following new restrictions:

* Music can no longer be streamed to your computers wherever they are
— now it can only be streamed to computers on your LAN (no more
listening to your home music server while you’re at the office)

* Music can no longer be streamed to any number of people on your LAN
— now you can only stream music to a maximum of five people per 24
hours. If your friends tune in for ten seconds of music and then tune
away, that eats up one of your 24-hour slots.

* Playlists can no longer be burned 10 times — now they can only be
burned seven times.

* The iTunes API will no longer respond to all the apps you download
to increase iTunes’ functionality — now iTunes contains a blacklist
of apps whose API calls are silently discarded, as punishment for
adding functionality that Apple doesn’t care for.

You buy a song on day one and can do ten things with it. A few weeks
later, you can only do nine things with it. Then eight. Then seven.

Last week, many TiVo owners discovered that several of the free-to-air
and cable shows they received with their PVRs could not be saved
indefinitely, and would be automatically deleted after a set period.

Last year, Comcast PVR owners discovered that all their stored
episodes of Six Feet Under were deleted a few weeks before the DVD
came out.

The right to store your music and movies, the right to watch your
movies in any country you find yourself in, the right to timeshift and
space-shift, the right to re-sell, the right to loan, the right to
share your media with your family regardless of your familial
arrangements — these rights all belong to the public. Copyright law
reserves these rights from control by rightsholders.

DRM is a mechanism for unbalancing copyright, for betraying the
statutory limitations on copyright, for undermining the law itself. By
granting rightsholders the ability to unilaterally confiscate public
rights under copyright, DRM takes value out of the public’s pocket and
delivers it to rightsholders.

When you acquire a car, you acquire the right to charge your phone off
its cigarette lighter. No car owner has to assign that right to you.
Even if the car manufacturer thinks it can make big bucks by selling
the exclusive right to charge phones in its car to Nokia, nothing
prevents you from charging your Motorola phone from the lighter.

More complex are the rights reserved to the public under the banner of
fair use. Fair use is the copyright doctrine that allows users to make
uses *even if the rightsholder objects*. For example, critics,
parodists, educators, archivists and disabled people all have certain
rights to use copyrighted works without permission from
rightsholders. In order for a DRM system to permit you to extract some
video for the purposes of making a parody, but stop you from doing
this for the purposes of burning the movie to a CD and selling it on
eBay, the DRM system has to be capable of reading your mind and
determining why you want to make your use.

The gradual tightening of DRM screws will alienate ever-larger groups
of customers. There are some who believe that if you turn the heat up
gradually enough, the customer will never notice that she has been
boiled. History suggests otherwise. The repeated disastrous attempts
to introduce DRMed CDs into the marketplace tell us that once a
customer is accustomed to a use, she is unlikely to accept a product
that restricts it.

WHAT HP SHOULD DO

HP is under no obligation to play by the entertainment industry’s
rules in order to gain access to content. Format-shifting,
time-shifting and space-shifting are legal practices with long and
honorable traditions (indeed, Apple’s own iTunes software contains a
mechanism to format- and space-shift your CDs by ripping them to MP3,
as does Microsoft’s Media Player).

However, when tech companies seek a closer relationship with the
entertainment industry, they find themselves in the position of having
to offer means for restricting the use of their products in ways that
the market generally rejects — no end-user buys products because of
their DRM.

The worst-case scenario is to end up in a situation like the
Blu-Ray/HD-DVD wars. The two consortia responsible for these competing
formats are competing to please the entertainment industry by adding
more and more onerous restrictions to their technologies, restrictions
that raise the manufacturing costs while reducing the commercial
viability of their products.

HP need not follow this disastrous strategy. Practically every device
in the field has one or more analog outputs. It is both possible and
legal to connect digital recording devices to these outputs and make
near-perfect digital copies that can be played back and manipulated
on devices without Hollywood’s blessing. Devices such as the
Slingbox, the Orb, and MythTV all do this today.

These devices play perfectly to the core strengths of the tech and
telecoms industry. PC vendors who provide flexible set-top boxes that
ease the pain of recording and librarying AV material will create
markets for ever-more-capable set-top boxes that have larger and
larger storage capacities, as well as backup solutions, service and
troubleshooting, etc.

A WORD ON TRUSTED COMPUTING

Current models for trusted computing conflate many features that are
useful to the user with many that undermine user privacy, investment
in content, and data-integrity.

On the positive side, trusted computing allows for superior
countermeasures against spyware and other malicious software. It
contains crypto accelerators that safeguard communications integrity
and secrecy. It eases the pain of managing end-to-end crypto for
private communications.

On the negative side, trusted computing can enforce policies against a
user’s wishes. Trusted computing can be used to block the use of
interoperable products (e.g., to force a user to use Internet Explorer
instead of Mozilla by allowing remote parties to reliably distinguish
between the two), and to block or complicate the backing up or migration
of user data. Additionally, trusted computing can be used as a
superior enforcement mechanism for DRM restrictions, particularly
those that seek to unilaterally renegotiate the terms under which
content is acquired.

This need not be. “Owner override” is a conceptual model for modifying
trusted computing hardware to retain all of its user benefits while
eliminating the dangers posed by allowing a device to enforce policy
against its owner’s wishes.

For more information on “owner override” please see Electronic
Frontier Foundation Staff Technologist Seth Schoen’s excellent paper
on the subject:

http://www.eff.org/Infrastructure/trusted_computing/20031001_tc.php

> Owner Override works by empowering a computer owner, when
> physically present at the computer in question, deliberately to
> choose to generate an attestation which does not reflect the
> actual state of the software environment — to present the
> picture of her choice of her computer’s operating system,
> application software or drivers. Since such an attestation can
> only be generated by the computer owner’s conscious choice, the
> value of attestation for detecting unauthorized changes is
> preserved. But the PC owner has regained fine-grained control,
> even in a network environment, and the PC can no longer be
> expected to enforce policies against its owner. Owner Override
> removes the toolbox that allows the trusted computing
> architecture to be abused for anti-interoperability and
> anti-competitive purposes. It restores the important ability to
> reverse engineer computer programs to promote interoperability
> between them. Broadly, it fixes trusted computing so that it
> protects the computer owner and authorized users against attacks,
> without limiting the computer owner’s authority to decide
> precisely which policies should be enforced. It does so without
> undermining any benefit claimed for the TCG architecture or
> showcased in Microsoft’s public NGSCB demonstration. And it is
> consistent with TCG’s and most vendors’ statements about the
> goals of trusted computing.
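
A conceptual sketch may help visualize the mechanism Schoen
describes. This is not a real TPM or TCG interface: the SHA-256
“measurement” and the function names below are stand-ins.

```python
# Illustrative only: hashes stand in for hardware measurements, and no
# actual trusted-computing API is modeled here.
import hashlib

def measure(software_image):
    """Stand-in for the hardware's measurement of the running software."""
    return hashlib.sha256(software_image).hexdigest()

def attest(running_image, owner_present=False, owner_chosen_measurement=None):
    """Report a measurement of this machine to a remote challenger."""
    if owner_present and owner_chosen_measurement is not None:
        # Owner Override: a physically present owner may deliberately
        # present the picture of her choice of her own machine. Because
        # only the owner can take this branch, attestation still detects
        # changes the owner did not authorize.
        return owner_chosen_measurement
    # Ordinary path: the attestation reflects the actual software state.
    return measure(running_image)

# Example: a remote service demands the "blessed" player, but the owner
# runs an interoperable one and chooses what picture to present.
blessed = measure(b"official-player-v1")
print(attest(b"interoperable-player", owner_present=True,
             owner_chosen_measurement=blessed) == blessed)  # True
```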

CONCLUSIONS

I can hardly fault HP for embracing the received wisdom on DRM.
However, the received wisdom is rarely a path to commercial success.
In the global marketplace, HP has numerous competitors, from giants to
smaller, nimbler firms — and if any company has an appreciation of
the potential of two guys in a garage, it should be this one.

The question isn’t *whether* one of these companies will defect from
the DRM game, but *when*. The first to market with better, more
powerful, more capable devices will emerge the clear winner.

I don’t believe HP can afford to sit tight and hope that the unspoken
agreement not to anger Hollywood will hold.

eof

Of Course Dave Can Modify Cory’s Content

Dave Winer has set up a silly straw-man in his ongoing complaints about utilities like Greasemonkey and the Google Toolbar that let the user modify content that appears in their browser. Dave wonders if he can do the same thing with Cory Doctorow’s CC-licensed books.

And the answer is that, of course he can. If Dave wants to download Cory’s books and edit them so he appears as the author, that’s completely within his rights. He could remove words, add words, do whatever he wants to the text in his browser or text editor, just as users can use Autolink or Greasemonkey to make any sort of modifications they want to text appearing in their browser.

But Dave sets up a straw man when he wonders if he then would be able to set up shop and sell said modified novel. Well, of course not, except within the terms of the CC licenses Cory uses. Reproducing and redistributing said modified content is a completely different issue. Neither Autolink nor Greasemonkey, after all, allows the user to hit a button and publish their modified content to a website. There’s nothing there that says take the content you’ve modified and redistribute it to other users.

Whether or not I should be able to create a filter that replaces all instances of “Dave Winer” with some expletive is a completely separate issue from whether or not I should be able to wholesale copy and paste all the content from Scripting.Com onto my blog and pretend that I wrote it.

It is worth noting, however, that Winer regularly reproduces other people’s content in the form of screen shots which he then annotates or otherwise modifies, so it’s a bit odd to see him suddenly attacking the ability to reproduce, modify and republish content that he doesn’t own. Did he receive permission from Yahoo 360, for example, for reproducing, modifying and redistributing this screen shot? But I thought publishers had absolute rights to never have their content modified by others without their permission?

Cory Doctorow’s Problems with American Airlines

Cory Doctorow posted last week about a bizarre encounter with American Airlines in which an AA attendant tried to convince him that a Transportation Security Administration regulation required him to write out a list of the names and addresses of everyone he planned to stay with while in the United States (if I remember correctly, Doctorow is a Canadian national who has recently been working in the UK for the Electronic Frontier Foundation).

Then the blogger behind SecondaryScreening.Net writes a letter to AA to see if this is true and gets the following response from AA spokesman Tim Wagner,

That said, our contracted screener veered from standard procedure when she asked for Mr. Doctorow to write the addresses of his destinations in the United States. She did clearly state that once the interview was completed, the address list would be destroyed in front of Mr. Doctorow or that he could have the list to keep. American Airlines absolutely does not register or record that type of personal data.

Although the agent concerned is very promising, this incident clearly showed a lack of experience in the questioning process. The agent will go through additional training and supervision. Through daily briefings, the remainder of the station will benefit from the experience gained from this incident.

Doctorow replies that,

At no time did the screener or her supervisor ever state that the list would be destroyed in front of me, nor that I could keep the list. In fact, all three AA security people I dealt with — the screener, her supervisor and the terminal manager — told me that they didn’t know what would be done with the list after the interview, that they had no idea what AA’s document-retention and data-privacy policies were

The AA response doesn’t pass the smell test. What would be the point of asking a passenger to write out the names and addresses of the people he’s staying with simply in order to destroy that list in front of him?

Source:

Cory Doctorow and Secondary ‘Secondary Screening’ Classes. SecondaryScreening.Net, January 21, 2005.

Why is American Airlines gathering written dossiers on fliers’ friends? Cory Doctorow, Boing! Boing!, January 19, 2005.

Now They’re Promoting Conspiracy Theories Over at Boing! Boing!

As I’ve mentioned previously, I really enjoy Boing! Boing! but sometimes that blog just goes off into loony insanity. For example, Cory Doctorow points to this silly conspiracy nonsense claiming that Nicholas Berg was killed by Westerners who are trying to frame al-Zarqawi for the murder (probably the same people who faked the Daniel Pearl video) and Doctorow says of this,

The author states that a number of these will likely be explained away, but taken as a whole, this very convincingly implies that Berg was not killed by the terrorists that the CIA fingered, and may, in fact, have been killed by westerners.

For the record, note that this sentence is erroneous because Doctorow apparently takes the conspiracy piece at face value when it says,

For a number of reasons, it does not appear that the Jordanian terrorist Abu Masab Al-Zaraqawi, who was voice identified by the CIA (and whose name was on the tape), was involved.

But the CIA has not made an official statement about whether or not al-Zarqawi is on the tape, as the tape itself claims. Rather, newspapers have quoted an anonymous CIA source as claiming that a voice match suggested there was a “high probability” that the voice was that of al-Zarqawi.

Boing! Boing! — Talking to Your Children is McCarthyism!

Almost as amusing as watching the RIAA flailing around trying to find a way to stop file sharing is the ridiculous rhetoric that some web sites use to describe the RIAA’s tactics. For example, here’s Cory Doctorow on an advisory for parents that the RIAA released about file sharing (emphasis added),

The RIAA is sending out advisories to press-contacts at various media outlets about their “Are Your Kids Breaking the Law When They Log On?” campaign, which aims to scare parents into spanking their kids for file-sharing, and comes across as red-scare-era propaganda. It’s funny: Hollywood fought the Red Scare and McCarthyism tooth and nail, but today, they’re more than happy to appropriate its rhetoric and tactics.

Yes, the RIAA advisory notes that 83 percent of teens believe illegally downloading music is morally acceptable and gives parents advice like,

SET HOUSE RULES AND SPELL OUT THE CONSEQUENCES OF NON COMPLIANCE. As you consider the potential consequences of illegal file swapping and the danger to your computer you can limit access to illegal sites through parental control software like Cybersitter or NetNanny or through parental controls in AOL or MSN. You can take away Internet privileges for a set amount of time if you feel your child is not obeying the rules.

Oh the horror. How will the Union ever withstand such a shocking outburst of intolerance and McCarthyism?

The only scare tactic here is Cory’s ridiculous overreach in referencing McCarthyism (it’s exactly this sort of thing that helps content companies sell things like the DMCA to Congress and the wider public, and it just helps prove the RIAA’s case that those wanting to find a more reasonable solution to the copyright issue are fanatics).

You can read the full text of the RIAA advisory here on Cory’s site.