Dear Twitter and YouTube, you need to deal with your trolls.

 


The 1st of February 2016 was the first time I had ever been called a N****r. I wasn’t called it once or twice, but three times in the space of 20 minutes, by people online I had never met and will probably (more like hopefully) never meet in my entire life. Looking back now, almost three months later, being called a N***** was the least of my problems.
For three weeks straight I was a target of hundreds of racist internet trolls from across the UK, Europe and America.
These trolls called me several variations of “Ne*ro”, “Filthy N*****” and “N*g**ess”.
They referred to me as a: “Monkey”, “Chimp”, “Ape” and “Harambe’s cousin”.
They said I was: “mentally retarded” and “stupid”.
They told me to “eat shit”, that “All Africans live in Mudhuts”, to “eff off back to Africa and die there you useless parasite” and… “Lol kill yourself”.
They asked me: “Are you such a bitter b***h because your clit was cut off with a rusty razor?”, “Which STD will end your miserable life?” and “what a giant gas chamber! When will it be commissioned?”
They hoped for: “the next Ebola outbreak”, that I “get lynched you stupid nog” and that “if all whites agreed that the best course of action would be to exterminate blacks, we could do it in a week.”
This was all because they disagreed with something I said at the European Parliament hearing on the refugee crisis, which was captured in a video that went viral across Facebook, Instagram and then Twitter. Despite my reporting the abuse to YouTube and Twitter, these internet trolls relentlessly sent abusive comments and tweets for three weeks. It was like they were taking trolling shifts.
Eventually I gave up reporting the hate speech.
Why?
I lost confidence in YouTube and Twitter. I was sent only two acknowledgement emails from Twitter confirming they had received my report, and nothing at all from YouTube. I asked my friends for support in reporting the abuse, and many of them then became targets of abuse themselves.
Reading and reporting all those hateful words was draining and time consuming. It felt like my reports were going into a massive dark void.

So what was the point of upsetting myself by reading all the abuse in order to report it, only for nothing to happen?

I gave up counting all the comments. At a guess, I would say I received around 150 abusive comments in 72 hours.
I could no longer use YouTube and Twitter as normal. I had to turn off my notifications and several times leave my phone at home to try and have a ‘peaceful’ life.
This was the response from YouTube and Twitter when the Newham Recorder informed them what was happening:
Twitter: “[We do] not comment on individual accounts ‘for privacy and security reasons’ but more information can be found under its hateful conduct policy.”
YouTube: “Hate speech has no place on the YouTube platform and we remove hateful content and comments when they’re flagged to us. People can flag inappropriate comments individually when logged in by clicking on the three dots next to the comment or through youtube.com/reportabuse.”
This response was disappointing, vexing and not good enough.
My MP Lyn Brown is pretty badass. Lyn goes into serious mama bear mode when any of her constituents are in trouble. She immediately contacted Google (who own YouTube) asking them to do right by me and within 24 hours they called me and did. I am so thankful for Lyn and the lovely Policy and Press Officer at Google! However, being an Elected Councillor and having a great network of cool people kinda places me in a privileged position. Not everyone has a badass MP; or friends like Laura Bates from EverydaySexism who can use her brilliant following to galvanise brilliant women (and men) around the world to support me; or a great relationship with their local police officers to get the right things done and done quickly.  What about those that don’t have such connections?
My experience online, and the response from the social media companies, is not unusual; in fact, it is far too common. Many social media users, particularly women and people of colour, have been and are still being targeted by trolls online. Social media companies can and should do more to stop the trolls.
I want to use what was a horrendous, intimidating and menacing experience to put pressure on social media companies to do a lot more to tackle online trolling. No other social media user should have to go through what I have been dealing with. Let’s change the game.

This is not about imposing restrictions on how we use social media, or about curtailing our human rights to free speech and freedom of expression. These recommendations are solely about protecting everyone from trolls who hide behind anonymity.

Trolls know that the way social media companies currently respond to abusive behaviour means they will never be held accountable for their violent words and criminal acts. These recommendations are about ensuring online platforms are a safe place for everyone to use and express themselves, free from hate speech, harassment, bullying and any personal abuse.
Below is a summary of the five key recommendations I would like YouTube and Twitter to consider and adopt.

 

1. Deterrence for better prevention

Social media companies need to challenge the culture that users should expect to face abuse as part of the status quo. This is especially the case if you are a woman, a person of colour or a public figure. Social media companies should enforce a zero-tolerance culture towards online trolling and abuse in order to deter trolls from creating accounts solely to abuse users.
1.1 Stronger wording in their Terms of Service policy. This should include “we monitor and track all engagement on our platform”; “cooperating fully with local law enforcement”; and outlining a “one warning and then permanent suspension” rule.
1.2 Before creating a new account, all users should be required to confirm that they have read a summarised version of both the Twitter Rules and the Terms of Service.
1.3 At present, the Twitter Rules can only be found within the Terms of Service. The Twitter Rules should be clearly visible on both the app and the website.
1.4 YouTube should follow Twitter’s lead, publish clear YouTube rules and adopt recommendation 1.3.
1.5 A basic verification process for all users, similar to the YouTube Certified or Twitter Verified process. This verification could be applied when an account is created, or when a user is given a first and final warning (see 1.1). Alternatively, a privacy setting could allow users to engage only with verified users and not with anonymous accounts.
There are words trolls say on certain social media platforms that they know for certain they could not get away with saying offline in our society. We see significant levels of trolling by anonymous and fake accounts on YouTube and Twitter, but not so much on Instagram or Facebook. This suggests an issue with their business models and sign-up processes. Twitter and YouTube need to review this and address the core reasons why ill-intentioned people think they can say these things on their platforms.

2. Effective Reporting Process

My experience of reporting racial abuse to social media companies was very poor. I found the process difficult, time-consuming and upsetting. Reporting should be made a lot easier for users, and reports should be dealt with within 24 hours, to prevent abuse escalating. Leaving abuse unaddressed for a long period (72 hours, in my case) I believe attracted more trolls and more abuse, much like the broken windows theory.
2.1 Twitter and YouTube should welcome and encourage users to report those breaking the rules. Suggesting that users block or mute trolls should not be the first step offered. Trolls should be warned and, if persistent, removed from the platform. They evidently cannot conduct themselves like other users; their targets should not simply be told to ignore them until the trolls find someone else to harass.
2.2 More detailed categories for reporting different types of abuse, and the option to select more than one category. Trolls harass and send abusive content at the same time; it is not one or the other.
2.3 Provide users with the option to report more than one comment at a time. Twitter has just implemented this; YouTube should do the same.
2.4 Ensure an acknowledgement email and a copy of the report form are sent to the user immediately.
2.5 Social media companies should aim to deal with serious categories of abuse within 24 hours and feed back to the user what has happened.
2.6 Social media companies can afford to do more. They should provide support to users who have been through an intense and traumatic experience.

3. Basic Transparency with User

The behind-the-scenes operations of social media companies remain pretty much a mystery. Greater transparency is needed. How many people work for Twitter and YouTube? How many employees moderate reports? Are staff adequately trained and supported? Where are they based, and do they work across different time zones?
3.1 It should be made clear how social media companies are investing some of their billions of pounds in new algorithms to pick up hateful and unlawful words.
3.2 It should also be made clear whether algorithms are set to raise a red flag. For example, when a user is receiving an unexpectedly high volume of engagement from other users mentioning banned words, this could be flagged to a moderator, who can then review the exchanges and issue a warning if necessary.
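To make recommendation 3.2 concrete, here is a minimal sketch of what such a red-flag rule could look like. This is purely illustrative: the class name, the banned-word list and the thresholds are my own assumptions, not anything either platform actually runs.

```python
from collections import defaultdict, deque
import time

# Illustrative values only -- a real platform would tune and maintain these.
BANNED_WORDS = {"slur1", "slur2"}
WINDOW_SECONDS = 60 * 60   # look at the last hour of mentions
FLAG_THRESHOLD = 10        # banned-word mentions before flagging to a moderator

class AbuseFlagger:
    """Tracks mentions aimed at each user and flags spikes of banned-word abuse."""

    def __init__(self):
        # target user -> timestamps of abusive mentions they have received
        self._hits = defaultdict(deque)

    def record_mention(self, target_user, text, now=None):
        """Record a mention; return True if the target should be flagged for review."""
        now = time.time() if now is None else now
        words = {w.strip(".,!?").lower() for w in text.split()}
        if not (words & BANNED_WORDS):
            return False
        hits = self._hits[target_user]
        hits.append(now)
        # Drop hits that have aged out of the time window.
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        return len(hits) >= FLAG_THRESHOLD
```

A moderator would only be alerted once the threshold is crossed, so one-off uses of a flagged word would not trigger a review; only a sustained pile-on against a single user would.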
3.3 Social media companies should provide greater transparency about how their reporting systems work. As users of their platforms, we should know where reports go and how they are handled. Social media companies should tell us how many moderators are on the payroll and whether employees follow set criteria when dealing with reports. We should know whether these multi-million-pound companies are providing their employees with sufficient training and support. Social media companies should also tell users how data from previous reports is being used to prevent similar trolling behaviour in the future.
3.4 Social media companies should set themselves targets for responding quickly to reports, and measure customer service with “How did we do?” feedback forms. They should review how quickly moderators respond to different reporting categories, address any inconsistencies and set themselves targets to improve.

4. Communicating to rebuild trust 

Speaking with several social media users who have either experienced trolling or know someone who has, they told me they no longer trust social media companies to deal with trolling effectively. This has resulted in a growing number of people using their social media accounts very differently, or not using certain platforms at all. This is a great shame. Not only is social media increasingly reshaping how society operates in the 21st century; a lot of good happens on social media too. Social media brings strangers from different parts of the world together, helps people generate an income, find jobs, find love, express their talents, or even be discovered by a record label. The inventors of Twitter and YouTube could not have foreseen how their tech innovations would change the world, and I am sure they never intended trolls to take over their platforms. Social media companies therefore need to do more to rebuild trust with their users.
4.1 Social media companies should publish an annual report detailing key information: how many reports have been submitted, how many warnings have been given, how many accounts have been permanently suspended, how quickly reports have been dealt with and how moderators have responded to users who submitted a report.

5. Reasonable Retribution

Social media companies enforcing appropriate comeuppance for breaking the law is a reasonable ask.
5.1 If a user has been issued a warning and they are still breaking the rules it is not unreasonable for social media companies to remove them from the platform.  
There are high numbers of ill-intentioned people who just want to abuse, harass and bully users online. I have heard countless stories involving trolls creating several new accounts to continue harassing users even though their original account was suspended.
5.2 IP addresses linked to permanently suspended accounts should not be able to create a new account for at least 30 days. After that, they should be asked to complete a basic verification process (see 1.5).
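A sign-up gate along the lines of 5.2 could be sketched as follows. Again, this is only an illustration of the idea under my own assumptions (names, 30-day figure, in-memory storage); a real platform would need persistent storage and would have to handle shared or changing IP addresses carefully.

```python
import time

COOLDOWN_SECONDS = 30 * 24 * 60 * 60  # 30 days, per recommendation 5.2

class SignupGate:
    """Blocks new sign-ups from IPs tied to permanently suspended accounts."""

    def __init__(self):
        self._suspended_at = {}  # ip -> time of permanent suspension

    def record_suspension(self, ip, now=None):
        """Note the time an account from this IP was permanently suspended."""
        self._suspended_at[ip] = time.time() if now is None else now

    def signup_allowed(self, ip, now=None):
        """Allow a sign-up only once the 30-day cooldown has passed.
        The new user would then still complete basic verification (see 1.5)."""
        now = time.time() if now is None else now
        suspended = self._suspended_at.get(ip)
        return suspended is None or now - suspended >= COOLDOWN_SECONDS
```

The point of the cooldown is not that it is unbeatable, but that it removes the current situation where a suspended troll can open a fresh account within minutes.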
Social media companies need to take greater responsibility for the activities that take place on their platforms. YouTube and Twitter are no longer small start-up tech companies; they are multi-million-pound businesses, generating a lot of income and status from being known as some of the fastest-growing social media companies.

Running a company in our society means taking corporate social responsibility and taking it seriously.

I am sick of trolls ruining our experience online, and I am sick of being told to “just deal with it” and that “it comes with using Twitter”. No. YouTube and Twitter have all the resources needed to make their platforms safe for everyone and to deal with the trolls. Granted, some recommendations may be harder than others to implement, but due consideration should be given to each one. I welcome YouTube and Twitter to each invite me for a meeting to discuss my recommendations further.

 

If you have experienced online trolling and abuse or would like to add to the above recommendations please do get in touch.
