Bullying, unfortunately, is an issue faced in many schools. Not just in the playground, but increasingly, online too. Legally, all state-funded schools must have a behaviour policy in place that “includes measures to prevent all forms of bullying among pupils.”
To mark Anti-Bullying Week, safeguarding expert Lorna Ponambalum has written this piece on online hate: what it is, how your school can tackle it, and your legal obligations to your students.
COVID-19 restrictions have led to children spending more time on screens using social networks, communication apps, chat rooms and online gaming. Although this has undoubtedly let them stay in touch with friends and make new connections during the pandemic, they are also being increasingly exposed to high levels of online hate.
What is online hate?
Online hate speech is any online communication or image which encourages or promotes hatred, hostility or discrimination directed against a person or group of people because of their race, religion, disability, sexual orientation, gender or gender identity.
Is hate speech a form of bullying?
Hate speech and cyberbullying often overlap. Cyberbullying is defined as repeated, unwanted, aggressive online behaviour that involves a real or apparent power imbalance.
Cyberbullying can slowly transform into hate speech when it demeans a person or group based on characteristics associated with their identity.
In the most recent study by Ofcom (in 2020), 26% of 12-15-year-olds said that, in the past year, they had come across bullying, abusive behaviour or threats online, and 19% reported seeing hate speech online.
What are the different types of online hate?
Trolling – Posting deliberately provocative or offensive messages or images on the internet in order to get attention, cause trouble, or upset someone. These posts can then be reposted, shared, liked or retweeted, creating a continuous cycle of hate and misery for the victim.
Messaging – Messages containing hate speech or hateful images, sent directly or indirectly to victims through texts, emails, WhatsApp, online forums and gaming sites.
Online harassment – Repeated attempts to send unwanted communication or to make continuous contact with an individual or group, causing distress and fear.
Baiting – Intentionally making a person angry by saying or doing things to annoy them, for example by insulting their sexual orientation or race. Deliberately provoking an individual online in this way is designed to draw an angry, aggressive or emotional response from them.
Virtual mobbing – When a group of individuals uses social media platforms or messaging apps to post comments to or about another individual, usually because they are opposed to that person’s opinions.
Where is online hate posted and how is it spread?
It has been widely reported that social media sites and apps remain the most commonly cited source of potential harm online, although this may simply reflect users spending more time on social media than on other types of sites. According to the Ofcom (2020) report, among 12-15-year-olds who encountered hate speech online, 26% saw it on Facebook, compared with 8% on Instagram and 7% on YouTube.
Studies have reported that online hate spreads through a process of people “flocking together” online: once in a group of like-minded individuals, they learn and assume the attitudes of other group members, and so spread the hate material further. The situation is not helped by the recommendation algorithms that sites such as YouTube, Reddit and Facebook use, as these suggest content that users might like and can lead them towards hate material.
Why is hate speech so harmful?
Children and young people are particularly vulnerable to online hate because many are searching for groups or causes that will give them a sense of identity. Victims of online hate speech can be affected emotionally, mentally and physically.
The effects can include low self-worth, anxiety, feelings of insecurity, fear for their lives, and even self-harm or suicide. Others may feel embarrassed or isolated, which can affect their schooling and could lead to depression.
What does the law say about online hate?
Hate crime, whether committed online or offline, is illegal. But not all offensive online content is illegal in the UK: it is only considered a crime if the content incites hatred based on race, religion or sexual orientation.
When online material is hate-motivated but does not meet the threshold for a criminal offence, it is recorded as a ‘hate incident’. However, freedom of speech is a cornerstone of democracy, and UK law aims to protect it, so monitoring online content can be a fine balance.
There are many different UK laws aiming to tackle this. One such law is Section 4 of the Public Order Act 1986 (POA), which makes it an offence for a person to use “threatening, abusive or insulting words or behaviour that causes, or is likely to cause, another person harassment, alarm or distress”. Over the years, this law has been amended to include language that is deemed to incite “racial and religious hatred”, as well as “hatred on the grounds of sexual orientation”.
Other legislation used to outlaw hate speech includes the Terrorism Act 2006, which bans language that “encourages terrorism”, and Section 127 of the Communications Act 2003, which makes it illegal to send a message that is grossly offensive, or of an indecent, obscene or menacing character, over a public electronic communications network.
How do social media platforms protect users from online hate?
Many social media platforms have community guidelines and specific policies on hate speech which outline what is and isn’t allowed on the platform. If a user violates these rules, their account can be blocked or removed from the platform.
Some social media platforms also use artificial intelligence as well as human moderators to identify harmful content so it can be tackled early. However, curbing the growing volume of hate speech on these platforms remains a challenge, as detection often relies on users reporting it in the first place.
What are schools doing to tackle the spread of online hate?
The Teaching Online Safety in School guidance which was published by the Department for Education (DfE) in 2019 outlines what schools should be teaching pupils in order to stay safe online, as well as providing them with opportunities to learn how to behave online. There is an emphasis throughout the guidance that the teaching should always be age and developmentally appropriate.
This guidance is complemented by the Education for a Connected World framework, which outlines the digital knowledge and skills that children and young people should develop at different ages and stages of their lives. The document provides teachers with a number of learning statements to help develop the teaching of online safety. It can be used to support one of the main aims of the government’s Internet Safety Strategy: supporting children to stay safe and make a positive contribution online. All our lesson plans have been directly aligned to the Education for a Connected World framework.
The teaching of online hate and bullying is part of the new Relationships and Sex Education (RSE) and Health Education curriculum, which has been mandatory in all primary and secondary schools since September 2020. However, schools can delay teaching until the start of the summer term 2021 if they are not ready or are unable to meet the requirements.
How do I teach my students to tackle online hate?
There are a few things that young people can do to tackle and protect themselves from online hate and trolling. As their teacher, take an active interest in their online activity and talk with them about how they socialise online. Encourage meaningful discussions to develop their critical thinking.
The discussions with students can include the following ideas:
- Make sure that they treat others with respect as that is how they would want to be treated.
- Aim to foster a climate of tolerance and inclusion to both prevent and isolate hate speech incidents and create a social norm around acceptance of all students, regardless of who they are.
- Encourage students to have an open attitude and be inquisitive about other people because some instances of hate speech are based on ignorance or false information.
- Check their understanding about online hate and ask them if they would recognise it.
- Advise students to not post online that they are being targeted.
- Advise them not to respond to hateful or threatening content if they receive it. Instead, they should report it.
- Discuss with students how to identify trolls – trolls are the bullies of the internet, and it is important to be aware of who they are.
- Make sure they are aware of where the ‘report’ functions are available to users on the social media platforms that they use.
- Show them the Report Harmful Content website, where they can get advice about harmful material that falls short of a hate crime.
- If the online abuse they receive is unlawful, suggest that they report it to the police or the Internet Watch Foundation (IWF).
How can National Online Safety support you?
National Online Safety is an award-winning hub of online safety training trusted by over 25,000 schools worldwide. We provide online safety education, training, and updates that empower the whole school community.
Our members benefit from a range of courses, webinars, explainer videos and guides on the latest statutory guidance and online risks to help keep children safe online. This includes a dedicated online bullying category which includes relevant guidance and support touching on every aspect of bullying online, from explainer videos on online banter and online hate crime, to guides on trolling and screengrabs.
Sign up to get instant access to hundreds of learning resources and give your teachers the confidence to teach online safety successfully.
Posted by Lorna Ponambalum