How Social Media Companies Can Better Address Online Abuse 

Published April 2017 and revised August 2018


Introduction

Glitch!UK is a new, thriving and internationally known not-for-profit organisation with a mission to end online abuse in the UK. A ‘glitch’ means a temporary malfunction of a piece of equipment. When we look back on this period of time, we want to be able to say that the rise in online abuse was only a ‘glitch’ in our history.

There is an apparent ‘glitch’ on social media platforms: individuals know that, because of the way social media companies currently respond to abusive behaviour, they will most likely not be held accountable for their violent words and criminal acts. Glitch!UK calls on social media platforms, policy groups and all internet users to fix the glitch by ending online abuse for good.

We’ve developed a set of initial recommendations based on the experiences of survivors of online abuse. These recommendations are focused on ensuring social media platforms are a safe place for all people to use and to express themselves, free from online hate speech, harassment and personal abuse. Glitch is not about imposing restrictions on how we use social media, nor about censoring our right to free speech or freedom of expression. These recommendations are solely about protecting people from, and dissuading, online abusers who hide behind anonymity.

Below is a summary of the five key recommendations we would like the social media platforms YouTube and Twitter to consider and adopt.

1. Better Prevention through Deterrence

Social media companies need to challenge the culture in which users are expected to face abuse as part of the status quo. The chances of facing online abuse are especially high if you are a woman, a person of colour (1) or a public figure. Social media companies should enforce a zero-tolerance culture towards online abuse in order to deter those creating accounts solely to abuse and sow discord.

Due to the lack of consequences for poor online behaviour, there is a feeling of immunity afforded to those creating and using anonymous accounts online, and people are thus emboldened to behave in ways they would not behave in the real world. Preliminary research done with London-based focus groups suggests a significantly higher level of trolling by anonymous and fake accounts on YouTube and Twitter than on Instagram or Facebook. This suggests an issue with the YouTube and Twitter business models and/or sign-up processes. Twitter and YouTube need to review this and address the core reasons why abuse tends to be higher on their platforms than on others.

We suggest taking the following steps to execute the above:

1.1 Stronger wording in Terms of Service policies. This should include statements such as “we monitor and track all engagement on our platform” and “we cooperate fully with local law enforcement”, and should outline a “one warning and then permanent suspension” rule.

1.2 Twitter’s rules of use should be clearly visible and easy to find on both the app and website.

1.3 Require users to affirmatively consent to having read a summarised version of both YouTube’s and Twitter’s Rules (4) and Terms of Service.

1.4 YouTube should follow Twitter’s lead and publish clear “rules” rather than “guidelines”.

1.5 Apply a verification process similar to the Twitter Verified and YouTube Certified account processes when a user creates an account or is given a first and final warning (see 1.1).

1.6 Create a privacy setting that lets users engage only with verified users and not with anonymous accounts.

2. Effective Reporting Process

Based on the experiences shared with us and reported in the media, moderators’ responses to reports of racial abuse have been very poor. Survivors of online abuse find the reporting process difficult, time-consuming and upsetting. Reporting should be made easier for users, and reports should be reviewed within 24 hours to prevent the abuse escalating. Leaving abuse unaddressed for a long period of time begets more abuse, in much the same way as the broken windows theory (3) describes.

2.1 Twitter and YouTube should welcome and encourage users to report rule breakers. Suggesting that users block or mute trolls should not be the recommended first step. Users should have the option to flag inappropriate behaviour, have it reviewed within 24 hours and have a warning sent to the flagged user by the platform, not by the user. If the problematic user in question persists, they should be removed from the platform.

2.2 Create more category types for reporting abuse and give the option to select more than one category in a single report. Racial or sexual harassment and the sending of abusive content are different things, but they often happen at the same time.

2.3 YouTube, like Twitter, should provide users with the option to report more than one comment at a time.

2.4 Send an email of acknowledgement and a copy of the report form to the user immediately after they submit the report.

2.5 Within 24 hours, review the report and follow up with the user who filed it so they are aware of the steps the platform is taking.

2.6 Social media companies should provide follow-up support, in the form of information on local counselling groups or a free over-the-phone counselling session, for users who have experienced intense and traumatic abuse on their platforms.

3. Basic Transparency with Users

The behind-the-scenes operations of social media companies largely remain a mystery. Greater transparency is needed. How many employees moderate reports? Are staff adequately trained and supported? Where are they based, and are they working across different time zones?

3.1 Report annually on investments in new algorithms that detect certain hateful words and speech patterns.

3.2 Specify whether algorithms are set to raise a red flag. For example, when a user is receiving an unexpectedly high volume of engagement from other users mentioning banned words, this can be flagged to a moderator, who can then review the exchanges and issue a warning if necessary.
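To make this recommendation concrete, below is a minimal, purely illustrative sketch of the kind of red-flag rule we have in mind. The banned-word list, the one-hour window and the threshold of ten mentions are our own assumptions, not a description of how YouTube’s or Twitter’s systems actually work.

```python
from collections import deque
from time import time

# Hypothetical values -- each platform would tune these for itself.
BANNED_WORDS = {"example_slur_1", "example_slur_2"}
WINDOW_SECONDS = 3600   # look at the last hour of activity
FLAG_THRESHOLD = 10     # banned-word mentions before a moderator is alerted

class RedFlagMonitor:
    """Flags a recipient to a human moderator when they receive an
    unexpectedly high number of mentions containing banned words."""

    def __init__(self):
        self.recent_hits = {}  # recipient -> timestamps of banned-word mentions

    def record_mention(self, recipient: str, text: str) -> bool:
        """Return True if this mention pushes the recipient over the
        threshold and the exchange should be escalated to a moderator."""
        if not any(word in text.lower() for word in BANNED_WORDS):
            return False
        hits = self.recent_hits.setdefault(recipient, deque())
        now = time()
        hits.append(now)
        # Drop mentions that fall outside the time window.
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        return len(hits) >= FLAG_THRESHOLD
```

The point is not the specific numbers but that the rule, whatever form it takes, should be disclosed: users should know that this kind of monitoring exists and that a human moderator reviews whatever it flags.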

3.3 Provide greater transparency on how their reporting systems work. As users of these platforms, we should know where reports go and how they are handled. Publish how many moderators are on the payroll and whether employees follow set criteria when dealing with reports. Invest in using data from previous reports to prevent similar anti-social and hateful behaviour in future.

3.4 Set targets for response times and customer service satisfaction rates based on user satisfaction surveys. Review moderator response times per reporting category to optimise and improve.

4. Rebuild Trust through Communication

Several social media users we spoke with, who have either experienced trolling or know someone who has, told us they no longer trust social media companies to deal with the issue effectively. This has resulted in a growing number of people using their social media accounts very differently (2) or not using certain platforms at all, which is a great shame. Despite these rising issues, a lot of good happens on these platforms. Social media brings strangers from different parts of the world together, helps people generate income, find love, express their talents or even be discovered by a record label. The founders of Twitter and YouTube couldn’t have foreseen how their tech innovations would change the world and most likely never intended this kind of activity to fester on their platforms. Trust can be regained, but only by first making a real commitment to transparency.

4.1 Publish an annual report detailing key information: the number of reports submitted, how many warnings were given, how many accounts were permanently suspended, response times to reports, and users’ satisfaction ratings of how moderators responded to those who submitted a report.

5. Enforce Reasonable Retribution

Enforcing appropriate repercussions for breaking the law is a reasonable ask offline, and it should be viewed this way online too.

5.1 If a user has been issued a warning and they are still breaking the rules, it is not unreasonable for platforms to remove them.

There are large numbers of ill-intentioned people who just want to abuse, harass and bully users online. We have heard countless stories of trolls creating several new accounts to continue harassing users even though their original account was suspended.

5.2 IP addresses linked to a permanently suspended account should not be able to create a new account for at least 30 days, and should then be asked to complete a basic verification process (see 1.5).
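As a rough illustration of recommendation 5.2 only (the suspension log, IP address and dates below are invented for the example), the cooldown and verification step could be combined along these lines:

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(days=30)  # minimum wait after a permanent suspension

# Hypothetical store: IP address -> date of the most recent permanent suspension.
suspension_log = {
    "203.0.113.7": datetime(2018, 8, 1),
}

def signup_decision(ip_address: str, now: datetime) -> str:
    """Decide what happens when a new account is requested from an IP address."""
    suspended_at = suspension_log.get(ip_address)
    if suspended_at is None:
        return "allow"                 # no prior suspension on record
    if now - suspended_at < COOLDOWN:
        return "block"                 # still inside the 30-day cooldown
    return "require_verification"      # cooldown over; ask for basic verification (see 1.5)

print(signup_decision("203.0.113.7", datetime(2018, 8, 20)))  # block
print(signup_decision("203.0.113.7", datetime(2018, 9, 15)))  # require_verification
```

Shared and changing IP addresses mean this can only ever be a speed bump rather than a complete barrier, which is why we pair it with the verification step in 1.5.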

Conclusion

The bottom line is that social media companies need to take greater responsibility for the activities that occur on their platforms. YouTube and Twitter are no longer small startup tech companies; they are multi-million pound businesses. Running a company in our society means taking on corporate social responsibility and taking it seriously. We would welcome an invitation from YouTube and Twitter to meet privately and discuss how to tackle these issues.

Although our five recommendations are aimed specifically at social media companies, policy and decision makers also need to close legal loopholes regarding abusive online behaviour, and there is a huge need for digital citizenship provision.

Glitch!UK believes that by working together we can begin to create safer platforms for all users and fix the glitch.


Source list

  1. Pew Research; 1 in 4 black Americans have faced online harassment because of their race or ethnicity: http://www.pewresearch.org/fact-tank/2017/07/25/1-in-4-black-americans-have-faced-online-harassment-because-of-their-race-or-ethnicity/
  2. Pew Research; Key takeaways on how Americans view – and experience – online harassment: http://www.pewresearch.org/fact-tank/2017/07/11/key-takeaways-online-harassment/
  3. Encyclopedia Britannica; Broken Windows Theory: https://www.britannica.com/topic/broken-windows-theory
  4. Twitter; Rules: https://help.twitter.com/en/rules-and-policies/twitter-rules