Following the Christchurch shooting in New Zealand, governments sprang into action to declare the internet to be the real villain. It wasn’t. And isn’t. But that didn’t stop a strange series of policies from being enacted.
The New Zealand censorship board declared footage of the shooting — captured by the shooter himself — illegal. Once the footage was illegal to share or possess, the government went after those who did both, resulting in at least one person being sent to prison for making it available online.
The Australian government followed suit. It declared the footage illegal, putting pressure on social media companies and service providers to take down uploaded copies “expeditiously.” This term wasn’t defined in the rushed legislation. Nor were companies given any guidance on what amount of time was considered “reasonable” to react to reports of uploaded footage in order to avoid $168,000 (per incident) fines. Presumably the Australian government would know reasonableness when it saw it and fine accordingly.
Companies did what they were vaguely instructed to do. So did Australian internet service providers. The Guardian reports blocking efforts began immediately, with ISPs targeting any site where the footage was hosted. To date, these efforts have resulted in the blocking of 43 websites. It appears ISPs are maintaining their own blocklists, since the government hasn't bothered to hand down any guidance on its recently passed "abhorrent content" law.
Months after the fact, the Australian government is finally codifying the block orders it’s issuing.
To avoid legal complications, the prime minister, Scott Morrison, asked the e-safety commissioner and the internet providers to develop a protocol for the e-safety commissioner to order ISPs to block access to the offending sites.
The order issued on Sunday covers just eight websites, after several stopped hosting the material or, like 8chan, ceased operating.
To have these blocks lifted, sites have to take down the material. But the review process lags behind the takedowns. Block orders are only reviewed every six months by the e-safety commissioner’s office.
There are obviously speech concerns that aren't being addressed by this process or the legislation that prompted these site-blocking efforts. The footage and the shooter's manifesto are undeniably newsworthy. They are also of interest to researchers and any number of law enforcement agencies. Unilaterally declaring this content illegal turns these parties into criminals. The law doesn't appear to contain any exceptions for journalists, researchers, or anyone else who may have a legitimate reason to possess or share this content.
The Australian government is fine with this because the e-safety commissioner has unilaterally declared this content to be so bad there can be no legitimate reason for anyone to have it in their possession.
“The slippery slope argument I keep seeing [is] this is not obscene content or objectionable content [but] it’s clearly illegal. I don’t see any public interest in making this kind of material that is designed to humiliate and to incite further terrorist acts and hatred.”
Well, okay. I guess as long as a government official can’t see any public interest, there must be no public interest concerns. These blocking orders may be targeting specific content that’s fairly distinctive, but the e-safety commissioner’s statement ignores the breadth of the law, which targets far more than these two pieces of content.
The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to notify the Australian federal police about or fail to expeditiously remove videos depicting “abhorrent violent conduct”. That conduct is defined as videos depicting terrorist acts, murders, attempted murders, torture, rape or kidnap.
There goes a whole lot of newsworthy content, including content that may have investigative or evidentiary value. The vagueness of the law encourages proactive efforts from social media companies, which is going to result in a lot of false positives, as well as the memory-holing of content that’s arguably of public interest, no matter how “abhorrent” that content may be.