Donald Trump ban debate flags the need to better police Big Tech

The decision by Twitter to suspend Donald Trump’s account in recent days has triggered a vigorous debate about content on social media, and how it is regulated and controlled. These are issues governments have been grappling with since the internet began to be widely used by consumers. Should there be limits to free speech online? If so, how should limits be set and enforced?

Australia has been at the forefront globally in establishing effective regulatory frameworks that apply to what is posted on social media. In 2015, we legislated to establish the eSafety Commissioner, a world-first government office where people can go for help if they have been victims of online harm. In addition to removing illegal content online, such as abhorrent violent material, the commissioner has the power to order platforms to remove a range of harmful materials, including, for example, cyber-bullying directed at a child and unauthorised distribution of intimate images. It has been a practical, effective mechanism to help keep Australians safe online, with thousands of children having cyber-bullying content removed.

Last month, the government released an exposure draft of a new Online Safety Act, designed to strengthen and expand the eSafety Commissioner’s powers. Under the new Act, the commissioner would have the power to deal with serious cyber abuse directed at an Australian adult, including the power to direct that such content be removed if the platform did not take appropriate action after a complaint from a user. The threshold for cyber abuse has been set higher for adults, recognising that adults are generally more resilient than children, and to properly balance freedom of speech considerations.

The new Act would include a set of basic online safety expectations, designed to make clear the expectations of the government on behalf of the community as to what social media platforms must do to help keep Australians safe online. These would include expectations that the platforms develop community standards, terms of service and moderation procedures that are fairly and consistently implemented. One example would be the rules platforms currently apply against threats of violence online.

Another initiative by eSafety has involved working with industry to develop a Safety by Design framework. Among other things, it encourages platforms to have in place policies to ensure consistency and rigour when making decisions about user sanctions – like the ones that have caused such debate this week.

Whether online or offline, there has never been an absolute right to free speech. In the classic legal formulation, no one is free to falsely shout “fire” in a crowded theatre. Traditionally, free speech has been balanced against other considerations – such as whether speech is threatening or offensive or defamatory.

Nor is there anything new in private corporations making decisions about who is able to say what on their platforms. Traditional media outlets such as television and radio stations and newspapers routinely impose restrictions on what people are able to say. The most fundamental of these is that for your voice to be heard you need to get past a gatekeeper such as an editor or producer.

It is true that social media platforms differ from traditional media businesses in that they allow people to post content – which can then potentially be seen by millions or billions – without subjecting that content to editorial control. There is no curation or selection of the content. But that does not mean people are free to post whatever they want, without consequences.

The social media platforms have terms of use that give them the right to remove content or suspend or block accounts if their terms of use are breached. As many have observed in recent days, the way in which these content decisions are made by the platforms is not as consistent or transparent as it should be.

Compared to traditional media businesses, social media platforms to date have shown great reluctance to take responsibility for what is posted on their sites. Time after time, a site will fail to take material down, even when, on any objective view, it violates the site’s terms of use. This can have devastating consequences for victims of online abuse.

Requiring that social media platforms do a better and more consistent job has been a clear focus for the Morrison government. We very much support the principle that there should be a public regulatory framework within which decisions about removing content are made by social media platforms (and, if necessary, can be overridden by decisions of government).

With the existing regulatory powers of the eSafety Commissioner, and the expanded powers proposed in the recently released exposure draft of the Online Safety Act, this is a principle we are putting into practice.

This article appeared in The Australian on 13 January 2021.