Submission: Content Regulation review

Introduction

The Department of Internal Affairs has published a proposal to change how New Zealand regulates media content.

Out would go the Broadcasting Standards Authority, the Media Council, age ratings for movies, the technology industry’s online media code, and possibly even the Advertising Standards Authority.

In would come a new independent regulator to help industry sectors develop and implement codes of practice governing how they behave. These codes of practice would be mandatory for qualifying organisations, and non-compliance would be backed by the threat of fines.

The stated reasons for this are:

  • New Zealanders are being exposed to harmful content more than ever before.
  • There are particular concerns about bullying and harassment, about young people accessing age-inappropriate material, and about the promotion of harmful behaviours such as disordered eating.
  • The split between multiple industry complaints bodies is confusing, and they all work differently. The same content might be subject to different rules depending on where it is published.

The DIA claims that there would be no substantive changes to what types of speech are illegal (e.g. child sexual abuse material, inciting racial hatred), but at the same time there would be new limitations on harmful speech, and new controls on how content platforms administer their systems. The term ‘platform’ is used to mean those either publishing their own material or providing the ability for others to do so.

It is important to note that the regulator would not act against individuals who created or posted harmful content. Rather, people would have to comply with the standards of the platform they use, and the regulator could penalise the platforms if they failed to enforce those standards.

The proposal as it stands is missing a lot of important detail. It’s mostly about the overall framework rather than the nuts and bolts of what standards would appear in the legislation and the codes of practice. While the DIA seems to favour a regulatory approach based on guidance rather than control, the discussion paper raises the possibility of using technical measures to block both individual pieces of content and non-complying organisations.

The civil liberties perspective

The Council has a strong interest in the regulation of publishing because such regulation has a significant impact on freedom of expression. Any legislation and codes of practice would have to be ‘demonstrably justified in a free and democratic society’, as they would be creating limits on the right to freedom of expression protected in the New Zealand Bill of Rights Act.

We know that modern politics is very concerned with how information is presented, and the abuse of censorship powers is tempting to governments wanting to control the narrative. A number of current government ministers have expressed concerns about disinformation and misinformation, and there is a very real risk that regulation in this area, in the name of preventing the spread of misinformation or disinformation, would enable control over information that is inconvenient to ministers or officials.

At the same time, we’re not opposed to all regulation. For example, the current arrangements for newspapers, radio, film, books and television still allow the existence of a media sector that provides a range of views on the issues of the day. We also recognise that democratic societies can choose to impose limits on certain types of extreme speech, such as death threats or incitement to racial violence, without significantly affecting the right to freedom of expression. Libel laws are problematic in that they disproportionately favour those rich enough to use them, but they are nevertheless an example of regulating misinformation and disinformation.

As always, it’s a matter of weighing up the benefits and the costs, moderated by the likely effectiveness of the measures and guided by inquiry into our society’s values. It’s also important that any regulation is procedurally fair and subject to appropriate oversight.

The NZCCL response

This is a huge topic and the proposal is quite light on detail in a number of important areas, so the Council’s response focuses on a few elements.

  • The current context
  • The codes of practice
  • The threats

The current context

DIA’s proposed reform is being developed in response to a perceived problem, defined as harmful and unsafe content on the internet, particularly when it is accessed by children. A long list of internet harms is given in the paper, including harassment, threats of death and violence, and the carefully phrased content that “interferes in democratic processes”.

No significant concerns are listed about content available in newspapers or on television and radio.

The problem with harm

The proposal talks about limiting access to harmful content – material where experiencing the content causes loss or damage to rights, property, or physical, emotional and mental wellbeing.

Stopping harm sounds like a good idea but, yet again, we want to point out that not all harms caused by speech are bad for society. Some harms, like the economic harm caused by a stock price falling when a company is revealed to be fraudulent, or the career-ending harm experienced by a politician whose corruption is exposed, are highly valuable to society.

It’s too easy to say that ‘we’ want to stop “harm” without doing the necessary work to properly identify what it is that ‘we’ really want to stop, or at least what might be appropriate to stop in a free and democratic society.

Protecting children

The proposal makes much of the dangers to our children, but many of those dangers are already illegal or already banned by the major content platforms. Those content platforms that refuse to follow common standards about young people’s access to inappropriate material aren’t going to change their minds because of a law in New Zealand.

It doesn’t help that many of our children have already learned to tell websites that they were born on 1/1/2000, so that age-related content restrictions don’t apply to them.

We all want to protect our children from harm, but at the same time we accept that we can’t and don’t wish to make all content safe for children. The Council worries about the constant framing of internet regulation as being about protecting children, as this is an emotional argument that short-circuits engagement with the complex reality of the situation.

The global internet

The internet is a global network with content creators, publishers and platforms located in many different countries. Some are based in New Zealand or have a presence here, while others are wholly overseas.

Our ability to impose our regulations and standards on these overseas providers is naturally limited, both by jurisdictional considerations and by the fact that we’re a small country that doesn’t generate much income for the providers.

Even the larger companies that want to appear to be good corporate citizens will only cooperate to a level that doesn’t inconvenience them unduly, and we can expect the smaller content platforms with no local presence to largely ignore any requests from a New Zealand regulator.

We have already seen major content platforms such as Google and Facebook respond to other countries’ attempts to regulate their activities by withdrawing services or blocking access to content. Would this happen here? And, conversely, could the government credibly try to block access to a content platform that does not comply with the relevant code or the regulator’s instructions?

The changing internet

The internet is still rapidly changing. Companies like TikTok have risen while older platforms like Google, Facebook and Twitter/X have slid from enthusiastic support to grudging utility. Other companies, particularly news providers, are finding that their business models may not be working as well as they hoped. Major events like the Covid-19 pandemic and the Christchurch Massacre have already had a significant impact on how the large platforms moderate themselves.

Moreover, the AI (artificial intelligence) revolution is already underway, and the ability to flood platforms with untold waves of artificially generated users and content is once again changing how the internet works for most people. Large platforms are already facing legal challenges over how they treat the employees who work on content moderation, and some have made thousands of such employees redundant. It is difficult to see them continuing to increase the number of people they employ to moderate content as its volume increases. Instead, it seems likely that this will lead to an arms race of AI content versus AI moderation that will only make regulation ever more difficult.

This rate of change means that legislation passed now may be made irrelevant by events, or at the very least will need to be frequently updated. This is not sufficient reason in itself to give up, but we should recognise that the internet is still very much a moving target.

The codes of practice

The sector-specific codes of practice are a key part of the proposal.

The proposal seems to be modelled on the Privacy Act, which also combines primary legislation with industry-specific codes. But there is a significant difference: under the Privacy Act, most businesses fall under the main Act, and the additional codes act as an extra layer of regulation for industries, like health, with specific requirements that the main Act doesn’t go into detail about.

The enabling legislation for content regulation would set out high-level safety objectives (e.g. “children not accessing age-inappropriate content”), as well as requirements around the processes platforms use and their reporting obligations.

These might then be turned into specifics in the codes of practice such as:

  • Standards for unacceptable content on the platform.
  • The use of warnings and tags on unsafe content.
  • The availability of user and parental controls to prevent exposure to certain types of content.
  • Information on algorithmic recommendation systems.
  • How to make a complaint and appeal a decision.

The intention is that the codes of practice are developed by industry in collaboration with the regulator. The public, civil society (non-government organisations, trade unions, etc.), and Māori would also engage with the regulator and participate in developing the codes. The regulator would then approve a code if it met the requirements set out in the legislation.

The codes of practice would then be mandatory for regulated platforms – those with over a certain number of users, or those that the regulator has designated as such. They would apply no matter where in the world the platform is run from.

Industry-led creation

The idea that the codes of practice should be created by the industry groups affected by them is ridiculous. The point of the codes is not to serve the interests of those subject to them but to achieve the goals set by the law passed by a future Parliament. Furthermore, expecting business groups to meaningfully co-design with Māori, civil society and other interested parties is wishful thinking. Their first priority will always be to serve their own interests, and the proposal does nothing to stop them from paying mere lip service to co-design.

A related issue is that these industry-led creation efforts will be dominated by the largest and wealthiest companies, as they are the ones with the people and money. They will be able to include requirements that suit their scale and capabilities while ignoring the needs and capabilities of smaller platforms. This would lead to poor outcomes and provide yet another avenue for anti-competitive behaviour to hinder rivals.

We also note that civil society and other interested parties would no doubt be expected to provide analysis and feedback on each code of practice for free. That is unfair, and further increases the dominance of the largest companies.

Fundamentally, though, in our system of representative democracy, Parliament should scrutinise and approve such codes of practice before they have any legal effect. If the proposals in this paper are adopted, we should also require the regulator to report annually to Parliament on the operation of the codes of practice, and create opportunities for the public to make submissions on their experience of how the codes operate.

Multiple codes of practice

One of the stated objectives is to remove confusion by standardising the multiple complaints bodies. Having multiple codes of practice seems likely to negate that possible benefit to the public.

The proposal discusses the difficulty of neatly categorising content platforms when they are all converging on providing articles, video, images, and audio from both their own staff and the public – but then immediately assumes that we can sort these platforms under one of multiple codes of practice. There is no guidance on how many codes of practice we might need, or on how platforms would decide which one best fits them.

It would replace the current system, in which the rules differ depending on where content is published, with a system in which the rules differ depending on both where something is published and who published it.

Codes rethought

It seems clear to us that to have any chance of working, the legislation would have to be more comprehensive and less reliant on the creation of multiple codes of practice. 

The development of any codes of practice should be led by the regulator in consultation with industry and others, with civil society and Māori participation in this process funded by the state. This is the only way of ensuring that the right people are both consulted and listened to, and that the codes serve the purpose of the legislation and not the interests of the companies subject to them.

The threats

The proposal mentions freedom of expression and civil liberties more than once. It also clearly states that it is not trying to change what speech is illegal. But we need to look beyond these assurances and consider not only the intention but also how the framework could be abused, either now or in the future.

One of the problems with creating legislative or policy frameworks is that you can say all the right things when they’re passed, but it’s too easy and politically cheap to subvert them in the future. New Zealand officials and politicians too often seem to think that we are uniquely impervious to the threats to free expression and democracy that have arisen in countries such as Hungary.

Hate speech

The Government recently abandoned its attempt to reform New Zealand’s hate speech laws and passed the topic to the Law Commission to write yet another report on it. Reading between the lines of the DIA’s proposal, it seems the department might see this framework as another avenue for expanding the regulation of hate speech online.

But while the recent discussion papers on hate speech law reform were careful to limit their scope and focus on the most extreme forms, we already expect content platforms to take a more expansive view of what they don’t allow. That is a reasonable decision for a content platform to take, as its users find hate speech offensive and off-putting; but by adding the weight of a law, a code of practice, and a regulator, we risk going too far and enforcing a de facto ban on offensive speech, not just hate speech.

For example, in our submission on the hate speech laws we talked about the importance of being able to criticise religion as an exercise of freedom of expression. But would the content platforms allow such criticism, or would their own codes of practice end up suppressing it? Combining corporate standards with the weight of government regulation is a real risk to freedom of expression.

Disinformation and misinformation

A key issue in recent years has been the spread of misinformation and disinformation. In its discussion of ‘consumer protection’, the DIA seems to envision that flagging false information would be one of the requirements of a code of practice. While we can see the value in this, we also see it as important to keep well away from the idea that government should decide what is and isn’t true, and therefore what can be communicated.

One of the lines in the DIA’s 2021 discussion paper about content regulation was that the department was concerned about the threat of “public processes and institutions losing the trust and confidence of society”. This worried us, as it seemed to imply that it was appropriate to regulate content that would undermine people’s trust in government – which quickly leads down the dark road of suppressing information that is embarrassing to the government.

Just as with hate speech, corporate standards backed by regulation are in many ways more worrying than either on its own. We don’t want to end up with the government’s regulator deciding what counts as misinformation and then punishing content platforms for not tagging or suppressing it.

The land of the long white firewall

In our earlier discussion of the global internet we noted the problem that many content platforms hosted overseas may choose to ignore a regulator in distant New Zealand. The DIA proposal responds to this problem by raising the option of extending the current filter used to block images of child sexual abuse to cover other currently illegal material.

The proposal does say that this wouldn’t apply to other material that isn’t currently illegal, but it is obviously a step further in the direction of creating a national firewall that can be used to block undesirable material – and the definition of undesirable could easily change in the future. Even if we are not looking to implement a Chinese-style locked-down internet soon, we should oppose any step in that direction.

Conclusions

The DIA hasn’t done a good enough job of establishing the need for this cross-platform rewrite of content regulation. Looking at print, television, and radio, the shortcomings of the current Media Council and BSA arrangements are not serious enough to justify this replacement. One of the main arguments is that there is confusion when content is available in multiple formats, but that same confusion would continue in the new system with its multiple codes of practice.

We accept that the internet has real issues with hate speech, misinformation and disinformation, harassment, and threats of violence, and therefore it at least makes sense to look at increased regulation as a possible solution. Indeed, it seems that the proposal is largely aimed at regulating internet content platforms and that old media are just along for the ride.

The lack of detail in the proposal makes it hard to determine just how much of a threat it would be to freedom of expression. Would it mainly set minimum standards for moderation processes, algorithmic transparency, and the tagging of disinformation or distressing content? Or would it push further into deciding what content is allowable, with platforms forced to suppress whatever the regulator decides? The DIA seems to be angling towards the first, but the structure is there to allow the second; codes of practice can be changed more easily and with less public scrutiny than legislation. We should not be made to guess, nor should we tolerate legislation that fails to make this distinction clear.

We also see a definite subtext in which the proposed regulation is an attempt both to reform how we handle hate speech and to impose new controls on disinformation. While the proposal repeats that it is not attempting to change what speech is illegal, it seems clear that the codes of practice would be expected to oblige content platforms to ‘handle’ certain sorts of content or face punishment. This is government expansion of censorship by proxy.

Overall, it feels as though, because previous attempts at state regulation of some kinds of speech have failed, a new route is being proposed: one that enables ministers to distance themselves from the regulatory standards by outsourcing them to a regulator and to codes of practice that the Government itself has not endorsed. This is unsatisfactory in our democratic system.

We also question whether this proposal would be effective. While locally based internet platforms would be compulsorily included, the dominant foreign content platforms are only ever going to comply as much as they already would – that is, the regime would only bind the companies that are already making significant efforts to moderate content and stop harassment. How much does a relatively powerless New Zealand regulator add to this effort? If the internet were a city, this proposal would be aimed at cleaning up the corporate mall while ignoring the streets and back alleys where much of the unsavoury activity occurs.

We suspect that the internet experience in five years’ time will be roughly the same whether or not these changes are introduced. It’s hard to recommend continuing work on a policy proposal whose benefits are so unlikely to be achieved.

Next Steps

The DIA proposes to take the feedback from this round and use it to further refine and develop the proposal, hopefully into something more concrete. This should lead into another round of public consultation (perhaps even deliberation) before the proposal is taken to the Minister to see if there is the political appetite to implement it.

We’re not sure that this review can be saved. The decision to take the widest possible scope, without real justification, is a fatal flaw that leads to reform for the sake of reform. It distorts what should be an attempt to regulate internet content in a way that tackles the major power imbalances created by the larger platforms, informed by genuine public deliberation on the values that should guide any action.

The main thrust of the review is regulation of content platforms on the internet. While we agree there are problems caused by how people use the internet, we’re not convinced that the proposed regulation will help. Some ideas that might be worth considering instead are:

  1. Completing the aborted review of New Zealand’s hate speech laws.
  2. Enforcing existing New Zealand laws against harassment and the making of death threats.
  3. Continuing to work with the large internet platforms and other countries on these and related issues, as is already being done with the Christchurch Call.