filters, walls and tunnels
This page explores filters, labels, AVS and other technologies
that enable restriction of access to content at the point
of reception rather than distribution. It also highlights
some technologies aimed at tunnelling through those barriers.
It covers filtering, labelling, blocking and circumvention technologies.
The site features supplementary notes on Digital Rights Management
(DRM) schemes and Age Verification (AVS) schemes. Blocking
of online advertisements is explored here.
As we have highlighted in the Network & GII guide
and Domain Name System profile
on this site, the internet is a network of networks that
involves the retrieval of files (for example html pages
and electronic mail) from one device - typically a server
- by another device, which might be an individual's computer,
a workstation on a corporate network, a PC within a library
network or even a mobile phone.
Accordingly, it is possible to restrict access from a
specific machine or from a group of machines (eg all/part
of a corporate network or all subscribers to a particular
internet service provider). That restriction - often characterised
as blocking or filtering - is one of the most contentious
subjects in debate about the regulation of cyberspace.
Perceptions about what can/should be published - and read
- differ widely. There is little agreement within the
electronic 'global village' about what content should
be restricted and less agreement about appropriate sanctions
or commitment to enforcement in dealing with questions
of pornography, hate speech,
politics and consumer protection.
Restriction at the point of reception rather than origin
is thus attractive. Paul Resnick, whose work is noted
below, comments that blocking
inappropriate materials at their source is not well
suited to the international nature of the Internet,
where an information source may be in a different legal
jurisdiction than the recipient. Moreover, materials
may be legal and appropriate for some recipients but
not others, so that any decision about whether to block
at the source will be incorrect for some audiences.
Restriction at the point of reception takes three forms.
The first and crudest restricts all access to the net.
In 2006 for example the Ministry of Communications &
Information in Belarus published new laws banning home
access to the internet, backed by heavy fines or prison
terms for violators.
A more sophisticated restriction has involved 'stop-lists'
of addresses, with the individual machine or the gateway
on the corporate/institutional network being set so as
not to retrieve files from those locations in cyberspace.
The size, growth and volatility of the net (highlighted
here) mean that such
lists cannot be comprehensive, ie cannot cover all sites,
and cannot be up to date.
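In outline - and purely as an illustration, with hypothetical addresses and a hypothetical list - a stop-list check at a gateway or individual machine amounts to something like:

    # Minimal sketch of stop-list blocking at a gateway or proxy.
    # The list and the addresses in it are hypothetical examples.
    BLOCKED_HOSTS = {
        "banned.example.com",
        "192.0.2.17",
    }

    def allow_request(host: str) -> bool:
        """Permit the request only if the host is not on the stop-list."""
        return host.lower() not in BLOCKED_HOSTS

    for host in ("www.example.org", "banned.example.com"):
        print(host, "allowed" if allow_request(host) else "blocked")

Keeping such a list comprehensive and current is the hard part, for the reasons noted above.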
The third method of restriction relies on characteristics
of the files rather than the point in cyberspace from
which they originate. Those characteristics include -
the type of file (for example audio and video content)
labels applied by the content creators/hosts
specific language or images within files
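A simple sketch of such characteristic-based checks follows; the file types, word list and threshold are hypothetical rather than drawn from any particular product:

    # Illustrative characteristic-based filter combining file-type and keyword tests.
    BLOCKED_TYPES = {"video/mpeg", "audio/mpeg"}   # restrict by type of file
    SUSPECT_WORDS = {"casino", "warez"}            # restrict by language within files

    def permit(content_type: str, text: str) -> bool:
        if content_type in BLOCKED_TYPES:
            return False
        return not (set(text.lower().split()) & SUSPECT_WORDS)

    print(permit("text/html", "weekly school newsletter"))   # True
    print(permit("text/html", "online casino bonuses"))      # False

As the 'breast cancer' example noted below suggests, keyword tests of this kind have no sense of context.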
The 2002 Australian government Effectiveness of Internet
Filtering Software Products study (PDF)
concentrates on technologies rather than broader policy
issues - highlighted below - but provides a valuable introduction
to filtering and labelling.
The study notes that while it is technically feasible to block
access to all undesirable internet content, no internet blocking
or filtering scheme will ever be 100% effective, or resist a
determined and informed attacker, though many of them will be
perfectly adequate in normal use.
Filtering is a difficult problem. Even text-based filtering
requires some ability to determine context (and meaning)
for words they discover. Early products were infamous
for simplistic filtering, with the blocking of "breast"
cancer content being the most quoted example. Filtering
products have improved since those early days but the
task is still very difficult and moderately high error
rates can be expected. Filtering out non-textual information,
such as photographs or video, is much more difficult.
All filtering technologies are fallible, and the more
effective they are, the more they risk intruding on
general Internet usage. Products have to strike a balance
between filtering out undesirable content, and allowing
access to (possibly unknown) useful content. The white
list products are the most effective because they are
the most restrictive and constrain users to a very small
part of the Internet.
Much attention is paid to filtering web pages but undesirable
content can be found in many places on the Internet,
including newsgroups and file servers. Some of the more
tightly filtered internet services, such as some of
those designed for the educational market, resolve this
problem by completely blocking access to all internet
services other than the Web and email.
The study's conclusions are consistent with the 2002 US National Academies'
report on Youth, Pornography & the Internet noted
earlier in this guide, the 2003 Children’s Internet
Protection Act: Report on the Effectiveness of Internet
Protection Measures and Safety Policies report
by the US National Telecommunications & Information
Administration, the thinner Effectiveness of Internet
Software Filtering Products (PDF)
from the Australian government's NetAlert, Richard Clayton's
Failures in a Hybrid Content Blocking System (PDF)
and the important Access Denied: The Practice and
Policy of Global Internet Filtering (Cambridge: MIT
Press 2008) edited by Ronald Deibert, John Palfrey, Rafal
Rohozinski & Jonathan Zittrain.
A 2008 NSW Parliament note (PDF)
considers claims about the mandatory blocking of content.
principles and performance
Leaving aside questions of principle, filtering technologies
pose several concerns -
they over-block, ie exclude access to 'legitimate' content
they under-block, ie do not exclude access to 'illegitimate' content
they are only as good as the information on which they're
based (the volatility of the web means for example that
what was a licit site some time ago - and identified
as such - might now be illicit)
Despite hype from promoters, artificial intelligence is insufficiently
advanced to consistently determine 'on the fly' whether
a graphic is offensive and thus prevent its display online.
(We have explored some technical issues here.)
Most censorship technologies instead attempt to block
access to specific sites, generally using a labelling scheme.
Sites are examined, manually or automatically, to determine
whether they should be labelled.
That examination encompasses whether the domain name or
metadata includes terms
deemed offensive, a process that's problematical since
many sites do not have extensive metadata. It also includes
scrutiny of whether text, graphics, audio or video within
the site meet the examiner's criteria.
'Objectionable' sites/pages are then labelled, either
on the site by its owner or in an independent list. Browsers
can be modified to recognise those labels and thus restrict
access to specific sites.
Practice is, alas, more problematical. There is no global
requirement that sites be labelled by their owners or
agreement about labelling criteria. Operators of sites
deemed offensive by a particular market or government
generally respond by moving to a domain in a less restrictive
jurisdiction or ignoring a rating.
Understandably, as the number of pages on the web increases,
manual rating is failing to keep pace with growth (rating
schemes probably cover less than 1% of domains). There's disagreement
about whether past ratings have been kept up to date.
Critics argue that while rating per se
is not contentious, the way in which it's been administered
by particular companies and associations suggests considerable
scope for abuse.
In Australia, for example, the sites of various parliaments,
government agencies and advocacy groups have been blocked
(the federal parliament site contains words such as 'whip').
Similar problems are evident in consumer 'seals' programs,
discussed in our Consumers
guide, and in past initiatives such as the US V-Chip plan
for blocking offensive television broadcasts.
Many of the filters, blocking mechanisms and other
content management regimes are based on the Platform for
Internet Content Selection (PICS),
a metadata-based standard for internet content that's
discussed in more detail
in the Metadata profile on this site.
PICS was developed in association with the World Wide
Web Consortium as part of that body's interest in the
'architecture' of the internet. It is described in a paper
by Paul Resnick & James Miller on PICS: Internet Access
Controls Without Censorship and in Resnick's PICS,
Censorship, & Intellectual Freedom FAQ (here).
Despite W3C endorsement it has never really got off the
ground and was for example damned by influential polemicist
Lawrence Lessig in his Tyranny In The Infrastructure
in WIRED. It provides for tagging of web pages,
eg allowing them to be labelled as containing violent or
sexually-explicit material so that particular browsers
can be configured to exclude access to them. It does not specify the nature
of the labels or their derivation.
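For illustration, a browser acting on a PICS-style label might compare the rating levels in the label against thresholds set by the user. The label string, category letters (loosely following the RSACi violence/sex/language/nudity categories) and limits below are examples only, not an authoritative rendering of the specification:

    import re

    # Hypothetical PICS-style label and user-set maxima; illustrative only.
    LABEL = '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" l r (v 1 s 0 l 2 n 0))'
    USER_LIMITS = {"v": 1, "s": 0, "l": 1, "n": 0}

    def page_allowed(label: str, limits: dict) -> bool:
        ratings = {k: int(v) for k, v in re.findall(r"\b([vsln])\s+(\d)", label)}
        return all(ratings.get(cat, 0) <= maximum for cat, maximum in limits.items())

    print(page_allowed(LABEL, USER_LIMITS))   # False - the language rating exceeds the limit

The mechanism is only as good as the labels themselves, which is where the practical difficulties discussed below arise.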
PICS is a building block for the Recreational Software
Advisory Council (RSAC) rating scheme administered by
the Internet Content Rating Association (ICRA),
an industry body concerned with the invidious task of
developing a viable content 'advisory' scheme, alerting
surfers that there may be something unpleasant ahead.
RSAC traces its origins to a 1994 video
games industry Game Ratings Working Group headed by
Sega and Nintendo, in opposition to the Entertainment
Software Rating Board (ESRB)
regime developed by the Interactive Digital Software Association.
ICRA has received some degree of endorsement from the
EU, along with the inevitable denunciations from zealots
who regard any content identification tool as tantamount
to book burning. The 2000 report
of the ICRA Advisory Board, drawing on the 'Best Practices'
developed by the Information Society Project (ISP)
at Yale's Law School, was construed by some as 'back to
the drawing board'.
In December 2000 ICRA released a more sophisticated rating
framework with endorsement by the CDT, arguably a major
step forward. In February 2001 that framework was extended
to several languages other than English.
ICRA-rated sites are identified by its trustmark.
As noted earlier in this guide, content labelling
is problematical: it is frequently hit and miss, it is
consistently over-hyped, and endorsement by government
in Australia and overseas is ill-founded.
Filters lauded in Australia have excluded the most benign
of sites while permitting access to those with 'explicit'
content. Studies of the "state-of-the-art"
BAIR filter - endorsed alas by the Commonwealth government
- in June 2000 demonstrated that while it excluded access
to images of dogs, journalists, trees and vegetables it
rated images of group and oral sex as acceptable fare
for the kids.
The 1999 report Filters & Freedom by the Electronic
Privacy Information Center (EPIC)
is a useful starting point in understanding the technology.
The EPIC site, like that of the Computer Professionals For Social Responsibility,
contains a range of information about 'censorware'. The
Internet Law & Policy Forum (ILPF), a business advocacy
body, published a report
on Content Blocking in 1997. Another document of
value is the COPA Commission's final report,
which drew on the more detailed study
on Risk & the Internet: Perception and Reality
by Christopher Hunter & Eric Zimmer.
The latter reflected Hunter's earlier thesis
- Filtering the Future?: Software Filters, Porn, PICS,
and the Internet Content Conundrum - and paper
on Internet Filter Effectiveness: Testing Over &
Underinclusive Blocking Decisions of Four Popular Filters.
In June 2000 the EU released the final report
of the Internet Content Rating For Europe (INCORE)
project, exploring a pan-European content rating and filtering
regime. The report was more cautious than material from
the Australian government.
The EU's Joint Research Centre is now engaged in a long-term
project to benchmark
filtering software and services, of particular significance
given the dubious value of vendor statements and many
of the government endorsements.
Filtering the Internet: A Best Practices Model is
a useful report
from the Information Society Project at Yale Law School. There's
a more comprehensive exploration in David Sobel's Filters
& Freedom: Free Speech Perspectives on Internet Content
Controls (Washington: Electronic Privacy Information Center 1999).
Lawrence Lessig's paper
What Things Regulate Speech: CDA 2.0 vs. Filtering
and the 1998 paper
by Lessig & Paul Resnick on Zoning Speech on the
Internet: A Legal and Technical Model are both worth reading.
The Censorware Project, to the left of Lessig and bodies
such as the CDT, has produced a number of reports
on specific filters such as Bess, Cyberpatrol, X-Stop,
NetNanny, CyberSitter, Smartfilter and Websense.
Peacefire, a feisty libertarian group, has useful reports
on technical aspects of filters - underwhelming and overhyped
- and how they're implemented, eg noting that CyberSitter
blocked the TIME magazine site after the publisher
criticised its policies.
There have been few sites that offer detailed studies
in support of filtering. Filtering Facts (FF)
- now accessible through the Internet Archive - is an
example of where arguably the emphasis was on filtering,
less on the facts.
The American Civil Liberties Union published a report
on Censorship In a Box: Why Blocking Software is Wrong
for Public Libraries and has been active in campaigns
against US federal legislation that ties funding of libraries
to their use of filters.
Ben Edelman's 2003 Empirical Analysis of Google SafeSearch
on the dominant search engine
- one of a set of important empirical studies - notes that there
are many kinds of intermediaries with the power and
ability to restrict what kinds of web content users
view. I have typically focused on traditional filtering
software (which blocks access to designated web sites
from affected schools, libraries, homes, or offices)
and on government filtering efforts (which block access
from entire countries). But a search engine - especially
a popular one like Google - can have a similar effect
for ordinary users who, without a search engine's recommendations,
have no easy way to know what is available. For many
Internet users, myself included, if a site isn't in
Google, it is essentially not on the web - so exclusion
from Google is arguably of comparable seriousness to blocking
by a filter. His research indicates that SafeSearch (intended to block
"pornography and explicit sexual content") in
fact excludes access to a range of sites without any sexually-explicit
content, including the US National Middle School Association,
the front page of Northeastern University and numerous
national/local government sites.
2007 saw a shift in Australian government policy on filtering.
As of late 2007 there was a statutory requirement that
ISPs offer consumers filter software for installation
on the personal computers of those individuals. Few individuals
seem to have taken up the offer and there are questions
about the effectiveness of that filtering.
ISPs and corporate network operators (eg major businesses
and schools) independently blocked some content, reflecting
the overall content regulation regime and administrative
convenience. Such blocking included restrictions on access
by end users to erotica. It also included restrictions
on access to social network services such as Facebook,
and to sites deemed to pose unacceptable risks to network
integrity (eg 'warez' sites) or to be distractions to staff/customers.
Almost all ISPs filtered email, thereby reducing the volume
of spam arriving at the user's inbox.
In the lead-up to the 2007 national election the three
major parties announced a commitment to mandatory filtering
by ISPs. The expectation was that consumers would be able
to opt out of that filtering by formally notifying their
ISP. The default position for most web content would be
filtering at the service provider level, irrespective
of any filtering by end users. That filtering would be
based on blacklisting of particular sites, rather than
conducted 'on the fly' on the basis of characteristics
- real or supposed - of images and other web traffic.
As of December 2007 federal legislation mandates that
filters be made available to consumers, who are expected
to pay for the particular filter (either as a discrete
purchase or bundled with the provision by the ISP of connectivity).
It does not - and arguably cannot - mandate that they
are used comprehensively and effectively. There is considerable
uncertainty about the number of devices on which filters
are installed (not necessarily the same as the number
of filters sold) and the numbers that are properly maintained.
Australian government requirements
for restricted access systems - initially under the Australian
Broadcasting Authority (ABA) and now under the Australian
Communications & Media Authority (ACMA)
- are available on the ACMA site. That site also features
a list of Australian government "approved filters".
Australian ISPs are required to offer their customers
a blocking program from an approved list of 16 products
as part of the Approved Code of Practice.
Curiously, the ABA's criteria for choosing the filters
did not include whether they work or not. An ABA
spokesperson told us that the emphasis was on whether
the software was easy to load rather than whether it performed
as required. We consider that a more nuanced and informed
policy - embracing user education, self-regulation and
even common sense - would better achieve the government's objectives.
The filtering regime reflected two papers commissioned
by the Commonwealth government: Blocking
Content on the Internet: A Technical Perspective and
Aspects of Blocking Internet Content.
The latter report, by CSIRO, disappointed the Federal
Government - no silver bullets - but is consistent with
the concerns about the effectiveness of filters identified
by the Electronic Privacy Information Center (EPIC)
in its December 1997 report.
That document demonstrated that many of the systems hyped
by the US (and Australian) government prevented access
to such cesspits as the American Red Cross, the San Diego
Zoo, Amnesty International and the Smithsonian Institution.
It also questioned the credibility of rating services,
eg NetShepherd's very problematic claim to have rated
"97% of the English language sites on the Web".
Further reports have been commissioned from CSIRO by NetAlert,
the community awareness body.
Local advocacy group Electronic Frontiers Australia (EFA)
offers one Australian perspective
on content rating and filtering proposals.
There is a succinct and intelligent discussion of issues
and technologies in Geoffrey Nunberg's January 2001 article
The Internet Filter Farce. Nunberg is a distinguished
linguist who edited The Future of the Book (Berkeley:
Uni of California Press 1996), discussed elsewhere on this site.
The development of firewalls around sites with 'offensive'
or 'adult' content has been reflected in the growth of
adult verification or age verification systems (AVS).
They are designed to restrict access by children - or
merely by non-paying customers.
They involve placing content behind a firewall that excludes
search engines and those visitors who do not use the relevant
AVS authentication key - a membership number or password
that is issued by an AVS service to subscribers on a commercial
basis (usually through a periodic online credit card payment).
The US National Academies report cited above refers to
AVS as "placing a 'plain brown wrapper' around an
internet adult site".
As the name suggests, most schemes are age-based. They
generally use a credit card (it is assumed that the owner
of the card is a legal adult) although some are tied to
driver registration or other public databases.
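In outline, the gate on an AVS-protected site is a simple membership check. The store, key format and expiry handling below are hypothetical placeholders rather than any provider's actual scheme:

    import datetime

    # Sketch of an AVS gate: content behind the 'wall' is served only to visitors
    # presenting a key that the verification service recognises as paid up.
    SUBSCRIBERS = {"AVS-1234": datetime.date(2026, 6, 30)}   # key -> paid-up until

    def admit(avs_key, today=None):
        today = today or datetime.date.today()
        expiry = SUBSCRIBERS.get(avs_key)
        return expiry is not None and today <= expiry

    print(admit("AVS-1234"))   # True while the subscription remains current
    print(admit("AVS-9999"))   # False - unknown key, the visitor stays outside the wall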
Some sites have an exclusive relationship with a specific
AVS provider. Others have relationships with several providers.
The revenue model for
some sites is based on the AVS provider sharing the authentication
fee with the site operator: more members through the firewall
for a 'free' site equals more money for the operator.
Consumer adoption of AVS is unclear. Figures for the number
of subscribers to particular services are problematical.
There is little information about the number of consumers
who have more than one membership, longstanding memberships
or churn from one provider to another.
AVS provide no protection against dissemination of noncommercial
adult content (for example downloading of material from
bulletin board services, most material placed online by
amateurs or the exchange of files by two associates).
Their effectiveness is undermined by the promotion strategies
of commercial site operators - many place 'teaser' images
and other content outside the firewall and thus accessible
to minors. Other problems include -
access by children to details of a credit card owned by a parent
or older sibling (or indeed having credit cards of their own)
the tendency of some guardians to keep the membership details
in a browser's settings, not clear the cache or not
log out of a session
reliance on public records, which excludes consumers outside particular
jurisdictions (eg non-US citizens) or people who don't
possess a drivers licence or other public identifier
and is more cumbersome (and thus less attractive to
operators/consumers) than card-based schemes
critics note that the age of adulthood differs from one
state to another, with site operators generally choosing
21 or 18 as the threshold and thereby denying access to
consumers under that barrier even though in some jurisdictions
it would be "perfectly acceptable and legal".
the business of blocking
Blocking access to online content - whether you're a corporation,
the Taliban or a concerned parent - has become big business,
with over 100 vendors in the US alone. There is an indication
of that industry in the Adult Content profile elsewhere
on this site.
Although it is competitive, most sales accrue to the ten
largest businesses. They have increasingly turned their
attention to management of content on corporate networks
rather than individual libraries, schools, households
and SMEs. Solutions for corporate networks often encompass -
management of activity across intranets (eg restricting
the transmission of erotica and viruses)
monitoring of outgoing messages (eg those sent to the address
of a competitor or containing sensitive documents)
restricting access to offensive or other content on
the web (eg blocking access to news or gambling sites)
Much of the academic literature about internet liberties has
concentrated on online censorship action by government
agencies in states such as Cuba and China, blocking citizen
access to news and other sites in the US, Australia and
other countries. There has been less attention to blocking
of content from developing countries. Some major US ISPs
(and some Australian ISPs) for example routinely block
email from Chinese ISPs because most outgoing traffic
appears to be spam.
technologies of freedom?
Enthusiasts have sought to offer tunnels through national firewalls -
to assist strengthening of civil society in regimes such
as China and Saudi Arabia
because of a commitment to notions that the 'spirit of the net'
is antithetical to any restriction on information flows
because of the joy of creating encryption systems or deconstructing them.
Much of the literature on such endeavours is pitched in
terms of 'technologies of freedom', with an expectation
that public-spirited experts in advanced economies will
provide tools for use by human rights activists and ordinary
people in repressive economies.
Provision of such tools is laudable. It is important to
recognise, however, that technology is neutral and does
not differentiate between use by peaceful protestors,
terrorists, paedophiles and commercial criminals. Systems
for anonymity and encryption are thus being used by a
range of individuals and organisations.
One of the more publicised initiatives is Freenet, described
by its developers as free software which lets you publish and obtain information
on the Internet without fear of censorship. To achieve
this freedom, the network is entirely decentralized
and publishers and consumers of information are anonymous.
Without anonymity there can never be true freedom of
speech, and without decentralization the network will
be vulnerable to attack. ... Freenet is not just theoretical,
it has been downloaded by over 1.2 million users since
the project started, and it is used for the distribution
of censored information all over the world, including
countries such as China and the Middle East.
Using Hacktivismo's Six/Four protocol, democracy activists in China and other authoritarian
states can exchange encrypted files, send e-mails, or
request Web pages without detection.
In 2006 Hacktivismo launched Torpark,
a "browser for anonymous surfing", dedicated
to the Panchen Lama. Launching Torpark from your USB
drive will launch a Tor circuit connection, which creates an encrypted
tunnel from your computer indirectly to a Tor exit computer,
allowing you to surf the internet anonymously.
One supporter commented that
it is a pretty cool idea ... very cool. Very slow, but
very cool. From what I've been told it's mostly for
people looking for beastiality [sic] porn, but you get
the idea. It's got all kinds of applications.
Another geek, however, lamented that whoever operates the exit node
gets to find out everything I do. Especially stuff I'm
trying to hide.
Researchers accordingly reported in 2007 that they had harvested email
and instant message communications from embassies (including
that of Australia) and other bodies that appeared to believe
Tor provides comprehensive encryption. In reality the last
node through which traffic passes has to decrypt the communication
before delivery to the final destination, with the operator
of that node seeing what passes through the server.
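By way of illustration, an application can route its traffic through a local Tor client (conventionally listening on SOCKS port 9050); the sketch below assumes the requests and PySocks packages are installed. The point about exit nodes stands: the last relay sees the request in the clear unless the destination itself uses https.

    import requests

    # Route a request through a local Tor client via its SOCKS proxy.
    # The exit node strips the final layer of Tor encryption, so only the
    # https connection to the destination keeps the content from its operator.
    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    response = requests.get("https://example.org/", proxies=TOR_PROXY, timeout=60)
    print(response.status_code)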
Psiphon, developed by the University of Toronto's Citizen Lab,
is meant to be downloaded by a person in an 'uncensored'
nation, turning that computer into an access point.
A user in a 'restricted-access' nation such as China or
Cuba (where Communications minister Ramiro Valdes described
the net in 2007 as a "tool for global extermination")
can then access that computer through an encrypted connection,
thereafter visiting censored sites by using the Psiphon
machine as a proxy. The expectation is that there will
be no evidence on the user's computer of having viewed
censored material once that person erases their internet
history following each session online.
Psiphon is designed to be shared within trusted social
circles, with the required login and password not being
publicly advertised. That feature will inhibit efforts
to distribute the program to as many people as possible.
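Leaving Psiphon's own protocol aside, the general approach - a trusted machine outside the filtered network relaying requests on a user's behalf - can be sketched as an ordinary authenticated proxy. The host, port, login and password below are hypothetical placeholders of the kind passed on privately by the machine's owner:

    import requests

    # Relay requests through a trusted machine outside the filtered network.
    # Credentials travel in the proxy URL; in the Psiphon model they circulate
    # within a trusted circle rather than being advertised publicly.
    RELAY = "http://login:password@friend.example.net:8080"
    PROXIES = {"http": RELAY, "https": RELAY}

    response = requests.get("https://blocked.example.org/news",
                            proxies=PROXIES, timeout=60)
    print(response.status_code)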