
Preventing abuse during online calls - Part Two

Published: 06/09/2022

Author: focusgov

The second part of a series of articles looking at online safety during digital meetings.

This is part two of our blog "Preventing abuse during online calls" - you can read Part One here.


How do we identify spaces that could become targets for online abusers? 

For online abuse to happen there has to be a way of sharing communication, whether via text (online comments, messaging apps...) or via online platforms such as Zoom, which provide real-time voice/video communication.  

While creating and participating in online communities can be a rewarding experience, their moderation has to be taken seriously.  
 
Rather than starting from scratch, we'll look at good practices first and turn to those who have already created safe spaces, to find out how they've tackled this problem. 

 

What do ‘safe spaces’ do to empower communication? 

Safe spaces are communities that promote and take action to preserve members' security, data protection and psychological safety, while also allowing their members to call for a moderator's help when needed.  

Usually, hosts (and moderators) will be present at larger gatherings and will monitor chat activity during calls, talks and presentations.  
 
Hosts and moderators share a responsibility to include everyone in conversations as much as possible. This responsibility is focused on empowering communication between event participants. By actively empowering our members to share their thoughts, opinions and questions, we create an inclusive community where everyone feels their opinion matters.  
 

Once the community has reached a healthy level of communication, the Hosts and Moderators gain a new role. Their main focus becomes serving their community by imposing restrictions on those who are recognised as offenders.  
 
This is often done by removing an offender's comments, stopping them from participating in conversations, removing them from current events or, in the worst-case scenario, issuing a permanent removal which stops the offender from ever accessing the services in question again. 

 

What could we do to prevent abuse during online calls? 

To prevent abuse, we need to understand what can be used to commit it. In most cases, abusers who are already in the same space (platforms, talks, calls...) will use personally identifiable information to target their chosen victims.  

To avoid this, we need to be careful about what information we collect and what information we display to the public.  
 
It has become standard practice to use nicknames instead of our legal names, to never share our exact location, and to avoid images that include information which could be used against us, e.g. sharing a photo of the wonderful view from our back garden that happens to show our street name.  
 
Most of our personally identifiable information is hidden by our software engineers and is only visible to the Hosts and/or Moderators of the space. 
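As a sketch of how such visibility rules might work in practice (the field and role names here are purely illustrative, not taken from any specific platform), personally identifiable fields can be stripped from a profile unless the viewer holds a privileged role:

```python
# Illustrative sketch: show personally identifiable fields only to
# privileged roles. Field and role names are hypothetical.

PRIVILEGED_ROLES = {"host", "moderator"}
PII_FIELDS = {"legal_name", "email", "location"}

def visible_profile(profile: dict, viewer_role: str) -> dict:
    """Return only the fields this viewer is allowed to see."""
    if viewer_role in PRIVILEGED_ROLES:
        return dict(profile)
    return {k: v for k, v in profile.items() if k not in PII_FIELDS}

profile = {"nickname": "SkyBlue", "legal_name": "A. Example",
           "email": "a@example.org", "location": "London"}
print(visible_profile(profile, "participant"))  # only the nickname remains
```

Ordinary participants then only ever see each other's nicknames, while a host reviewing a report can still identify the account behind it.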

In cases where the abuse focuses on the way a victim looks, e.g. their race or the clothes they wear, human input is needed. This is where our Hosts and Moderators have a responsibility to step in and take action. 
 
Hosts and Moderators should: 

  • make sure reporting tools are available 
  • warn the offender about their malicious/abusive commentary 
  • inform the offender of an impending restriction to the service provided (e.g timeouts) 
  • execute the restriction policies and actions if the offender hasn’t stopped the abuse 
  • permanently remove the offender via online bans or removal of their account 
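The steps above form an escalation ladder: each repeated offence triggers the next, stricter action. As a minimal sketch (the action names mirror the list above, but any real platform will have its own policy engine):

```python
# Illustrative escalation ladder: the Nth recorded offence maps to the
# Nth action, capping at permanent removal. Names are hypothetical.

ACTIONS = ["warn", "timeout", "kick", "permanent_ban"]

def next_action(offence_count: int) -> str:
    """Map the number of recorded offences to a moderation action."""
    index = min(offence_count - 1, len(ACTIONS) - 1)
    return ACTIONS[max(index, 0)]

print(next_action(1))   # warn
print(next_action(2))   # timeout
print(next_action(10))  # permanent_ban
```

Keeping the ladder explicit like this also makes the policy easy to communicate to members: everyone can see what happens after a first, second or repeated offence.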

With semi-permanent and permanent removals, we’ll often be presented with a few options to protect our communities. There are usually ‘Kick’ and ‘Ban’ options, but these can be circumvented: the abuser may be able to come back and continue, sometimes under a different name/nickname. 
 
Here we could look at restricting access for their email address or, potentially, their IP address. If the abuser is still able to get around these restrictions, we could look at blocking the whole range of IP addresses they might be using.  
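A simple sketch of such a check, using Python's standard ipaddress module (the banned email and the CIDR range below are illustrative placeholders):

```python
# Sketch: check a connecting user against banned emails and banned
# IP ranges. The entries here are example placeholders.
import ipaddress

BANNED_EMAILS = {"abuser@example.com"}
BANNED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example range

def is_blocked(email: str, ip: str) -> bool:
    """True if the email or the IP address falls under a restriction."""
    if email.lower() in BANNED_EMAILS:
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BANNED_NETWORKS)

print(is_blocked("ok@example.org", "203.0.113.42"))  # True: banned range
print(is_blocked("ok@example.org", "198.51.100.1"))  # False
```

Blocking a whole range is a blunt instrument, as it can also lock out innocent users sharing that range, so it is usually a last resort after individual bans have failed.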
 

Where can we get further help and information? 

In scenarios where abuse is persistent and we’ve exhausted our options, the first step is to talk to our digital teams, software engineers, hosting and internet providers, as well as the local authorities. 
 
Abuse and harassment, online or in person, are punishable offences. If you, or someone you know, has been a target, the local authorities have made their reporting tools available.
 
To make sure the authorities have enough information to start their investigation, it is always a good idea to collect and record any (and all) examples of the abuse you’ve unfortunately faced. 
 
Once you’re ready, you’ll be able to fill out an online form, make a phone call or visit a police station where the official investigation can take place. 
 
For more information, visit the UK Police advice on online harassment, or, if you’d like to share information anonymously, visit the Crimestoppers website for further details.