Every week there seems to be another news article about abusive messages on Twitter, ‘trolling’, or some campaign that suffers an incredible backlash online. Earlier in January, two people pleaded guilty to sending “menacing tweets” to Caroline Criado-Perez, who successfully campaigned to have Jane Austen selected as the individual to appear on the £10 note. The two were charged with offences under the Communications Act 2003, s 127. More recently, Stan Collymore has spoken out against Twitter’s lack of action against abusive messages, and even closed down his account (though it has since been reopened). Undoubtedly, there will be another such story next week. Yet Ms Criado-Perez said that the case against the two individuals who abused her was merely a drop in the ocean compared to the abuse that she and other women have received. Add to misogynist messages those containing racial slurs, homophobic and transphobic comments, extreme religious intolerance, and all sorts of more specific malevolent words, and it is amazing there isn’t at least one story in the papers every day. A common theme among all of these, however, most recently voiced by Collymore, is that victims don’t feel Twitter does enough to protect its users.
Like every internet site, Twitter has rules on what can and can’t be said or done on it. These are Twitter’s Rules and Terms of Service. According to the site, “targeted abuse or harassment is … a violation of the Twitter Rules and Terms of Service”. If a user wishes to, they can report another user, and Twitter will look into the matter. This blog from Glen Tickle over at Geekosystem shows how easy it is to report something, and if the report is found to show a violation of the rules, the infringer may be warned about the behaviour, or their account may even be suspended.
In short, while Twitter has the power to do something about abusive tweets, both the lack of clear action and the lack of a clear process make users feel that Twitter simply won’t help them when required. We are so used to dealing with companies and organisations that set out a complaints policy requiring a response within so many days that not being able to find this information quickly, or at all, is a disconcerting experience. It’s easy to see why people feel compelled either to leave Twitter, or to call the police. These are actions they can take that will at least give them some control over the situation.
Yet both of these are far from perfect options: the first stifles someone’s voice, preventing them from exercising their freedom of expression because someone else has overstepped theirs; and the second is a waste of already-stretched resources, since the abuse is often far from capable of reaching the threshold required for a criminal sanction. As noted by Lord Nicholls in Majrowski v Guy’s and St Thomas’ NHS Trust [2006] UKHL 34, a major case on harassment law that examined the differing thresholds between the parallel criminal and civil liability regimes under the Protection from Harassment Act 1997, “courts are well able to recognise the boundary between conduct which is unattractive, even unreasonable, and conduct which is oppressive and unacceptable. To cross the boundary from the regrettable to the unacceptable the gravity of the misconduct must be of an order which would sustain criminal liability”.
So why doesn’t Twitter do more? It is the body best placed to stop the problem most quickly, without wasting time or resources, and without affecting the use of its services by those who have suffered or done no wrong. However, Collymore noted in an interview with the BBC on Wednesday (22nd January 2014) that the police were “banging their heads against a brick wall” due to Twitter’s policies.
As noted on its Abusive Behaviour Policy page, “Twitter provides a global communication platform which encompasses a variety of users with different voices, ideas and perspectives. As a policy, [it does] not mediate content or intervene in disputes between users”. There seems to be a disconnect between the Rules and Terms of Service on the one hand and what Twitter actually does on the other, and it’s in this policy wording that the disconnect appears strongest. Given that Twitter is apparently (according to Craig Smith at Digital Market Ramblings) used to send 500 million tweets per day by 100 million daily active users, it is obvious that Twitter could never monitor that content at any realistic level. But this isn’t the excuse it gives; Twitter’s policy is predicated on the view that people will disagree because they are different, and that is all there is to it.
Twitter has a list of contacts and some advice for those receiving online abuse, but other than its report function, it appears that Twitter does very little else. This is, at a high level, understandable, given that its users are responsible for the content, but the constant barrage of complaints that it receives makes the advice to “turn to family and friends” a bit asinine. Twitter’s users don’t want a shoulder to cry on; they want the policies they agreed to abide by to be followed by the other party, the one who created those Terms of Service and Rules: Twitter itself.
A further flaw with Twitter’s abuse policy is what it considers targeted abuse. Admittedly, the list is not exhaustive, and Twitter acknowledges that the factors it lists are only some of those it takes into consideration, but all three focus only on one person sending messages to another, perhaps someone who is a ‘professional troll’ or someone who uses multiple accounts. What Twitter’s Terms of Service and Rules do not provide for is the sudden descent into mob mentality that can take over. The two convicted in the Criado-Perez matter were not, by any stretch of the imagination, her only abusers; Ms Criado-Perez was subjected to a cascade of abuse from many directions. It is not a rule of using the Internet that you have to agree with everyone (as this from XKCD shows), but the keyboard warrior’s slide from irritation to anger to abuse, and then to inciting others to do the same, or threatening violence (including sexual violence), is a symptom of the freedom and anonymity that the Internet provides. Looking at the different types of people arrested and convicted of criminal activity during the riots in England in 2011 shows much of that same sense of anonymity and mob mentality.
There is also one policy document that appears to be missing that one would imagine to be fairly standard: it is not clear at first glance how quickly Twitter reacts to the information provided in a report. There is a short paragraph about what happens, but there is no detail about what Twitter does with that information, or how long it takes. This matters, because someone who reports an offensive account has no idea what will actually happen to stop it, or how much longer they will have to suffer the abuse. This lack of power and control when reporting abusive tweets can be almost as horrific as the abuse itself.
Is there any real solution for stopping abusive behaviour on Twitter? Unlikely. If people have accounts deleted as a result, they can simply set up new ones; it is very difficult to stop them. Even with arrests and convictions for abuse on Twitter, users will keep sending abusive messages. They know Twitter will be slow to react, and so do their victims. Until Twitter gets swifter at reacting to complaints, this will not change. However, as the commercialisation of Twitter continues, with promoted tweets and the provision of access for paying companies to potential customers and information, Twitter may find itself having to take a more active role to keep its users happy. A few years ago, another social media platform took the world by storm, started to wrest more control from its users without adequate protections, and look at what’s happening…