Tue, 11 Feb 2020 - 07:52

Op-Ed: Digital giants must take control of murder online

During the past year we have seen a depressing and troubling pattern — a murderous rampage by a shooter who simultaneously documents his actions, with the results globally available on the internet.

The world was shocked almost a year ago by the terrible attacks on two mosques in Christchurch. A live stream of the slaughter, filmed by the perpetrator, was available on Facebook for 72 minutes before being taken down, and only after New Zealand police made contact with Facebook.

We saw it in October last year with the attack on a synagogue in Halle, Germany, this time live-streamed on the Twitch platform, owned by Amazon.

And just this weekend we have seen a similar horrific mass shooting incident in Nakhon Ratchasima, Thailand, with the shooter posting numerous images and a video to Facebook before and during his massacre of 29 people.

While it seems that the acts of violence themselves were not depicted in the perpetrator-produced content, as they were at Christchurch and Halle, this episode is yet another reminder of the risks of digital platforms being misused to amplify violence.

Today, as we mark Safer Internet Day, we are sadly seeing once again a typical feature of these incidents: a lack of transparency from the social media platform whose technology has been leveraged by these evil men.

It is essential that social media companies are transparent with users and governments around the world about the processes that take place on their platforms. They need to “lean in” — to use a term popularised by a senior executive of Facebook in another context.

Beyond the incident in Thailand, the risk of live-streaming terrorist and violent acts remains a serious issue.

Imagine if a mass murderer attached a camera to his weapon and arranged with one of our free-to-air television networks to carry a live broadcast of the shootings.

Of course — and thankfully — that is never going to happen.

Does anyone think if a TV station did this it would be allowed to retain its licence to operate as a broadcaster? It goes without saying that the Australian Communications and Media Authority would intervene immediately.

And quite apart from what the law says, no TV station would agree to provide a live broadcast of such content. Based on ordinary considerations of decency, of respect for human life, of a desire to not cause distress to audiences and of prevailing community standards, to agree to such an act would be unthinkable.

Yet the live-streaming functionality of the major social media platforms has allowed just such things to happen — and there remains a troubling lack of clarity and consistency from the platforms about what controls, if any, they have in place to prevent it.

Immediately after the Christchurch attacks, the Australian government moved quickly to impose clear requirements on the social media platforms under Australian law.

In April last year we passed laws under which digital platforms could be held criminally liable for not quickly removing abhorrent violent material (perpetrator-produced material depicting the most violent acts).

The government also established the Taskforce to Combat Terrorist and Extreme Violent Material Online to examine other possible responses.

For example, we have asked these platforms to put in place stricter checks and controls on live-streaming services to reduce the risk of the dissemination of terrorist and extreme violent material online.

It is frankly pretty surprising that a government needs to request that measures be in place to protect against the live-streaming of murder. Surely we could expect the social media platforms to establish such protections without being asked?

By contrast, I want to acknowledge the work of Australia’s internet service providers, such as Telstra, Optus, Vodafone, TPG and Vocus.

These companies acted proactively immediately after the Christchurch attacks to block access to internet sites known to be hosting the appalling video of the attack — even when their legal authority to do so was somewhat unclear.

Later this year the Morrison government will introduce new online safety legislation into parliament to expand and strengthen Australia’s online safety framework, which is already world-leading. Our proposal increases expectations on digital platforms to take responsibility for the misuse of their products and services.

It also includes a new power for the eSafety Commissioner, should a tragedy such as Christchurch occur again, to direct internet service providers to block access to such terrorist and extreme violent content, with a view to preventing that content from going viral.

And in the light of the attacks in Christchurch, Halle and now Nakhon Ratchasima, our government will be watching closely to see if the social media platforms take further effective action in response to the threat of terrorist and extreme violent material circulating online.

If there is a need for further regulatory action so that community expectations about online safety are met, our government stands ready to take it.

Paul Fletcher is the federal Minister for Communications, Cyber Safety and the Arts.