News and Publications

Tough criminal and financial penalties on the horizon for a host of cyber offences

Posted: 08/03/2022


Tech companies and their bosses could face tough penalties and significant sanctions if they fail to keep the content on their platforms compliant. Online crime has risen dramatically in recent years, with more than 200,000 offences recorded last year, up from 114,000 two years ago.

The UK’s Online Safety Bill was introduced in May 2021 to regulate online content, particularly targeting social media companies. The first draft placed a duty of care on large social media websites to remove harmful or illegal content and to protect children, following a sharp rise in child abuse offences, which have increased by 16% since 2019 to nearly 35,000 cases. The bill promises to ensure that internet users in the UK – especially children and other vulnerable groups – are the best protected in the world.

The bill aims to achieve this protection by clarifying a number of grey areas, introducing ‘fit for purpose’ online offences and, importantly, appointing Ofcom as the new online safety regulator to hold Big Tech to account. The bill proposes large fines of up to 10% of global turnover and potential personal criminal liability for senior managers who fail to discharge the new statutory ‘duty of care’.

Since May last year, the draft legislation has undergone scrutiny by numerous stakeholders, including the Law Commission and a number of parliamentary committees, during which time concerns were expressed that its proposals were being watered down. A parliamentary report issued on 14 December 2021, however, offered a series of recommendations to tighten the legal requirements on internet platforms, welcoming the Government’s push to go beyond self-regulation.

The report noted: ‘The era of self-regulation for big tech has come to an end. The companies are clearly responsible for services they have designed and profit from, and need to be held to account for the decisions they make’. The new duties will apply to most online platforms that host any content that can be posted or shared by users, including Facebook, Instagram, Twitter, TikTok, Google and YouTube.

In February 2022, following numerous committee meetings, the Government announced a series of changes to the bill. These include a requirement for all companies to conduct risk assessments relating to illegal content, together with duties relating to the rights to freedom of expression and privacy. In particular, services that can be accessed by children will be required to conduct ‘children’s risk assessments’, imposing a higher duty on those companies.

AI, algorithms and a failure to focus on safety first

A key concern outlined in the report was that self-regulation of online services had failed: algorithms have, to date, been used to maximise user interaction without sufficient regard to the human cost. The report emphasised the role of algorithms in spreading misinformation that contributed to significantly harmful events, notably:

  • Intensive care admissions of unvaccinated Covid-19 patients.
  • The insurrection at the US Capitol in January 2021.
  • Teenagers viewing content that promotes self-harm, eating disorders and suicide.

Public outrage was prompted in 2019 by the role of algorithms in the suicide of 14-year-old Molly Russell, following her exposure to distressing material about depression and suicide on her Instagram account.

Given such tragic events, the new recommendations, particularly the clauses relating to algorithms, are likely to be welcomed by social media users and parents alike: users will have greater protection, and parents can take comfort that platforms will be legally required to monitor online content. The changes do, however, need to be realistic for companies and provide a structured means of protecting users. This is the balancing act that the bill is attempting to strike.

Accountability

The December report recommended that criminal liability for tech executives take effect immediately. The Government has instead decided to allow a grace period of two years post-implementation before a senior manager, designated as the firm’s safety controller, would be personally exposed to criminal sanctions for failing to deal with repeated and systemic failings that result in a significant risk of serious harm to users.

Although this prospect is clearly unpopular with the platforms, users should be comforted by the additional incentive it gives companies to regulate online content. The two-year grace period will allow tech companies to seek legal advice and ensure they comply with the new laws.

One element of the draft bill that drew particular criticism was clause 11, which relates to protecting users from legal but harmful content. It raised concerns that platforms would assume a censorship role, potentially contrary to the principle of freedom of expression. In response, the report proposed removing the clause in its entirety and replacing it with categories of transgression that mirror illegality in the real world.

Ultimately, it was deemed necessary to go beyond the established online offences of terrorism and child sexual abuse, to protect users from online drug and weapons dealing, people smuggling, revenge porn, fraud, the promotion of suicide and the incitement or control of prostitution for gain. This approach aims to ensure that platforms are required to deal with hate speech and threats of violence swiftly and proactively, on pain of criminal sanctions.

Upcoming changes to the bill

The Government has recently announced that it will set out a range of further priority offences, in addition to those relating to terrorism and child sexual abuse and exploitation. These will include incitement to and threats of violence, hate crime and financial crime. By listing these offences in the bill itself, companies will not have to wait for secondary legislation before being required to take proactive steps to tackle this illegal content.

Conclusions

The recommendations in the report clearly highlighted the human cost of businesses’ use of AI and algorithms. This concern has been addressed by shifting the burden of regulating online content onto platforms and their executives, with clear and tangible consequences for non-compliance.

The bill is clearly trying to balance many competing interests, including the protection of consumers, freedom of speech and the commercial interests of technology companies. The recent changes indicate the direction in which the draft bill will continue to develop, and it will be interesting to see how the law responds to future issues with online content.

This article has been co-written with Kate Abercromby, trainee solicitor in the commercial dispute resolution team. 

