Ministers’ failure to ban far-right extremist groups is undermining the fight against online propaganda, a report has suggested.
Proscribing a group makes sharing its material a terror offence punishable by up to 15 years’ imprisonment, whereas hateful propaganda from non-banned groups is met with far lower sentences.
A report by the Henry Jackson Society (HJS) warned that posts by non-proscribed groups may not be properly monitored or taken down by social media companies.
Nikita Malik, director of the think tank’s Centre on Radicalisation and Terrorism, said some companies rely on government lists of banned organisations when deciding what to remove.
“The lack of far-right groups subject to proscription in the UK, when compared to Islamist groups, has left the authorities reliant on hate crime legislation rather than specific terrorist offences which carry heftier sentences,” she added.
“The government will need to keep this situation under review in a fast-moving online world, where offending causes real and significant harm.”
The report found that, because of the current legal regime, Islamists convicted of online offences received prison sentences averaging 73.4 months, three times the 24.5-month average handed to their far-right counterparts.
The sharing of hateful content online can be punished by a variety of offences, including malicious communications, hate crimes and causing “gross offence”.
But punishments are less severe than those for crimes under the Terrorism Acts, such as expressions of support for a proscribed organisation, viewing material useful to terrorists, encouraging terrorism and disseminating terrorist publications.
Takedowns have focused on Isis since it caught international security services off-guard with its use of online propaganda to inspire low-technology terror attacks around the world.
Social media companies have become increasingly adept at spotting jihadi symbols and language, but have progressed more slowly with the diverse range of indicators used by the far right.
Counter-terror police have named the far right as Britain’s fastest-growing terror threat and online material has emerged as a motivating factor in numerous terror plots.
Ms Malik said right-wing extremists frequently used “free speech” arguments to defend themselves against potential takedowns, while Islamists can claim their freedom of religion is being infringed.
She told The Independent that right-wing extremists had become adept at using “coded language” and dogwhistles that are difficult to combat using automated flagging or removal programmes.
Holocaust denial has increasingly been replaced with what antisemites frame as a “legitimate debate” and “questions about the accuracy” of historical events, Ms Malik said.
Meanwhile, Generation Identity, which spreads the white genocide ideology behind the Christchurch shooting and other terror attacks, attempts to frame its efforts in terms of “European culture” rather than race.
“Companies are unable to take down a lot of this content because they have a list from the government of proscribed groups and these groups are simply not on it,” Ms Malik said.
“There’s no consistency between the companies.”
The report cited examples of figures who had been removed from some mainstream platforms, but allowed to remain on others.
It also named extremists, such as anti-Islam figures Pamela Geller and Robert Spencer, who had been barred from entering Britain over extremism concerns but were allowed to remain on Facebook, Twitter and YouTube.
While researchers acknowledged that bans can merely push figures onto smaller platforms that are harder to monitor, such as the encrypted Telegram messaging service, the report said mainstream takedowns can reduce their ability to radicalise new followers.
“A lot of these people will have an audience who will listen to everything they say, a fan club, but we’re trying to reduce amplification techniques,” Ms Malik said.
“Easy-to-use social networking sites with a public audience are still the easiest way for them to reach out to new people, so they want to get their ideas across.”
The report, which was commissioned by Facebook, proposed a “harm classification system” to improve consistency across different kinds of extremism.
The system ranks people convicted of online-based extremism into six bands of threat according to 20 indicators, including audience size, the glorification of violence, prejudice towards minority groups and a lack of remorse.
When applied to extremists convicted of online offences in Britain between 2015 and 2019, two thirds of far-right offenders were in the lowest three risk bands, and a third in the highest three risk bands. In contrast, more than half of Islamists were in the top bands.
“It doesn’t matter what kind of extremism it is – it gives a very structured and fair approach online, so the platform is taking a consistent approach and can justify that,” Ms Malik said.
“I hope that social media companies take it on board.”
The report comes after the Commission for Countering Extremism called for the government to adopt a proposed definition of “hateful extremism” in order to standardise efforts across different ideologies while protecting freedom of speech.
A spokesperson for Twitter said it had removed hundreds of organisations for violating its violent extremism policy and “significantly expanded” its approach to hateful conduct.
YouTube said it had a zero-tolerance policy for hate speech and other extremist content, and had created new technology and hired experts to combat it.
A Facebook spokesperson said: “Our work with groups like the Henry Jackson Society is critical to helping the industry understand and make progress on these important issues.
“It is through collaborations like these and with governments, academics and other companies through the Global Internet Forum to Counter Terrorism, that we improve our collective ability to prevent terrorists and violent extremists from exploiting digital platforms.”
A government spokesperson said: “In 2016 the government proscribed the first extreme right-wing group, National Action, and further proscribed two aliases in 2017. Proscription must be based on a belief that a group is concerned in terrorism.
“Groups that are not proscribed are not free to spread hatred and incite violence and we continue to work with companies to crack down on such content. The Online Harms White Paper detailed our intention to establish in law a new duty of care on companies, overseen by an independent regulator.”